Full Code of inference-labs-inc/dsperse for AI

Repository: inference-labs-inc/dsperse
Branch: main
Commit: a71f618e5cd5
Files: 72
Total size: 939.0 KB

Directory structure:
dsperse/

├── .cargo/
│   ├── audit.toml
│   └── config.toml
├── .github/
│   └── workflows/
│       ├── integration_tests.yml
│       └── publish.yml
├── .gitignore
├── Cargo.toml
├── LICENSE
├── README.md
├── crates/
│   └── dsperse/
│       ├── Cargo.toml
│       ├── benches/
│       │   └── serialization.rs
│       ├── build.rs
│       ├── proto/
│       │   └── onnx.proto
│       ├── src/
│       │   ├── backend/
│       │   │   ├── jstprove.rs
│       │   │   ├── mod.rs
│       │   │   ├── onnx.rs
│       │   │   └── traits.rs
│       │   ├── cli/
│       │   │   └── mod.rs
│       │   ├── converter.rs
│       │   ├── error.rs
│       │   ├── lib.rs
│       │   ├── main.rs
│       │   ├── pipeline/
│       │   │   ├── channel_split.rs
│       │   │   ├── combined.rs
│       │   │   ├── compiler.rs
│       │   │   ├── dim_split.rs
│       │   │   ├── incremental.rs
│       │   │   ├── mod.rs
│       │   │   ├── packager.rs
│       │   │   ├── prover.rs
│       │   │   ├── publisher.rs
│       │   │   ├── runner.rs
│       │   │   ├── slice_cache.rs
│       │   │   ├── stage.rs
│       │   │   ├── strategy.rs
│       │   │   ├── tensor_store.rs
│       │   │   ├── tile_executor.rs
│       │   │   ├── tiled.rs
│       │   │   └── verifier.rs
│       │   ├── python.rs
│       │   ├── schema/
│       │   │   ├── execution.rs
│       │   │   ├── metadata.rs
│       │   │   ├── mod.rs
│       │   │   └── tiling.rs
│       │   ├── slicer/
│       │   │   ├── analyzer.rs
│       │   │   ├── autotiler.rs
│       │   │   ├── combiner.rs
│       │   │   ├── layernorm_fuse.rs
│       │   │   ├── materializer.rs
│       │   │   ├── mod.rs
│       │   │   ├── onnx_fold.rs
│       │   │   ├── onnx_proto.rs
│       │   │   ├── onnx_shapes.rs
│       │   │   ├── onnx_slicer.rs
│       │   │   ├── self_div_rewrite.rs
│       │   │   └── trace.rs
│       │   ├── utils/
│       │   │   ├── io.rs
│       │   │   ├── limits.rs
│       │   │   ├── metadata.rs
│       │   │   ├── mod.rs
│       │   │   └── paths.rs
│       │   └── version.rs
│       └── tests/
│           ├── integration_slice.rs
│           ├── schema_roundtrip.rs
│           └── sn2_contract.rs
├── deny.toml
├── docs/
│   ├── JSTPROVE_BACKEND.md
│   ├── overview.md
│   └── uv_packaging.md
├── pyproject.toml
├── python/
│   └── dsperse/
│       ├── __init__.py
│       └── cli.py
└── rust-toolchain.toml

================================================
FILE CONTENTS
================================================

================================================
FILE: .cargo/audit.toml
================================================
[advisories]
ignore = [
    "RUSTSEC-2026-0009", # time crate DoS via RFC 2822 parsing — transitive dep, not user-facing
]


================================================
FILE: .cargo/config.toml
================================================
[net]
git-fetch-with-cli = true


================================================
FILE: .github/workflows/integration_tests.yml
================================================
name: Integration Tests

on:
  push:
    branches:
      - main
  pull_request:
    branches:
      - main

concurrency:
  group: ${{ github.workflow }}-${{ github.ref }}
  cancel-in-progress: true

permissions:
  contents: read

jobs:
  fmt:
    name: Rustfmt
    runs-on: ubuntu-latest
    timeout-minutes: 10
    steps:
      - uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
      - name: Install Rust toolchain
        run: rustup show
      - run: cargo fmt --check

  test:
    name: Rust Tests
    runs-on: ubuntu-latest
    timeout-minutes: 45
    steps:
      - uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
        with:
          submodules: true

      - name: Install Rust toolchain
        run: rustup show

      - name: Install protoc
        run: sudo apt-get update && sudo apt-get install -y protobuf-compiler

      - uses: Swatinem/rust-cache@e18b497796c12c097a38f9edb9d0641fb99eee32 # v2.9.1

      - name: Test
        run: cargo test --locked --manifest-path crates/dsperse/Cargo.toml

      - name: Test (with python feature)
        run: cargo test --locked --manifest-path crates/dsperse/Cargo.toml --features python

      - name: Clippy
        run: cargo clippy --locked --manifest-path crates/dsperse/Cargo.toml --all-targets --features python -- -D warnings

  audit:
    name: Security audit
    runs-on: ubuntu-latest
    timeout-minutes: 10
    permissions:
      contents: read
      checks: write
    steps:
      - uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
      - run: rm -f rust-toolchain.toml && rustup install stable && rustup default stable
      - uses: rustsec/audit-check@69366f33c96575abad1ee0dba8212993eecbe998 # v2.0.0
        with:
          token: ${{ secrets.GITHUB_TOKEN }}

  deny:
    name: Cargo deny
    runs-on: ubuntu-latest
    timeout-minutes: 10
    steps:
      - uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
      - uses: EmbarkStudios/cargo-deny-action@3fd3802e88374d3fe9159b834c7714ec57d6c979 # v2.0.15
        with:
          command: check bans sources


================================================
FILE: .github/workflows/publish.yml
================================================
name: Build and Publish to PyPI

on:
  push:
    tags:
      - "v*"
  pull_request:
  workflow_dispatch:

concurrency:
  group: ${{ github.workflow }}-${{ github.ref }}
  cancel-in-progress: true

env:
  UV_VERSION: "0.10.8"
  MATURIN_VERSION: "1.12.6"

jobs:
  build-linux:
    if: >-
      github.event_name != 'pull_request' ||
      contains(github.event.pull_request.labels.*.name, 'test-build')
    runs-on: ubuntu-latest
    timeout-minutes: 60
    container:
      image: quay.io/pypa/manylinux_2_28_x86_64
    permissions:
      contents: read
    steps:
      - uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2

      - name: Set up Python
        run: echo "/opt/python/cp312-cp312/bin" >> $GITHUB_PATH

      - name: Install uv
        uses: astral-sh/setup-uv@5a095e7a2014a4212f075830d4f7277575a9d098 # v7
        with:
          version: ${{ env.UV_VERSION }}

      - name: Install system dependencies
        run: |
          dnf install -y protobuf-compiler protobuf-devel pkgconf-pkg-config perl-IPC-Cmd perl-Time-Piece clang-devel

      - name: Install Rust
        run: |
          curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh -s -- -y --default-toolchain nightly-2025-03-27
          echo "$HOME/.cargo/bin" >> $GITHUB_PATH

      - name: Extract version
        id: get_version
        shell: bash
        run: |
          if [[ "$GITHUB_REF" == refs/tags/v* ]]; then
            VERSION=${GITHUB_REF#refs/tags/v}
          else
            VERSION=$(grep -m1 '^version' pyproject.toml | sed 's/.*"\(.*\)".*/\1/')
          fi
          echo "version=$VERSION" >> $GITHUB_OUTPUT

      - name: Update versions
        run: |
          sed -i '0,/^version = ".*"/{s/^version = ".*"/version = "${{ steps.get_version.outputs.version }}"/}' pyproject.toml
          sed -i '0,/^version = ".*"/{s/^version = ".*"/version = "${{ steps.get_version.outputs.version }}"/}' crates/dsperse/Cargo.toml

      - name: Cache Rust dependencies
        uses: actions/cache@668228422ae6a00e4ad889ee87cd7109ec5666a7 # v5.0.4
        with:
          path: |
            ~/.cargo/registry
            ~/.cargo/git
            target
          key: manylinux-2-28-cargo-${{ hashFiles('**/Cargo.lock') }}
          restore-keys: |
            manylinux-2-28-cargo-

      - name: Build wheel
        run: uvx maturin==${{ env.MATURIN_VERSION }} build --release --manylinux 2_28 -i /opt/python/cp312-cp312/bin/python3

      - name: Test wheel installation
        run: |
          uv pip install --system --python python3 target/wheels/*.whl
          python3 -c "from dsperse import slice_model; print('PyO3 bindings OK')"

      - name: Upload wheel artifact
        uses: actions/upload-artifact@bbbca2ddaa5d8feaa63e36b76fdaad77386f024f # v7.0.0
        with:
          name: wheel-ubuntu-x86_64
          path: ./target/wheels/*.whl

  build-macos:
    if: >-
      github.event_name != 'pull_request' ||
      contains(github.event.pull_request.labels.*.name, 'test-build')
    runs-on: macos-latest
    timeout-minutes: 60
    permissions:
      contents: read
    steps:
      - uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2

      - name: Set up Python
        uses: actions/setup-python@a309ff8b426b58ec0e2a45f0f869d46889d02405 # v6.2.0
        with:
          python-version: "3.12"

      - name: Install uv
        uses: astral-sh/setup-uv@5a095e7a2014a4212f075830d4f7277575a9d098 # v7
        with:
          version: ${{ env.UV_VERSION }}

      - name: Install system dependencies
        run: brew install protobuf llvm

      - name: Extract version
        id: get_version
        shell: bash
        run: |
          if [[ "$GITHUB_REF" == refs/tags/v* ]]; then
            VERSION=${GITHUB_REF#refs/tags/v}
          else
            VERSION=$(grep -m1 '^version' pyproject.toml | sed 's/.*"\(.*\)".*/\1/')
          fi
          echo "version=$VERSION" >> $GITHUB_OUTPUT

      - name: Update versions
        run: |
          sed -i '' '1,/^version = /{s/^version = ".*"/version = "${{ steps.get_version.outputs.version }}"/;}' pyproject.toml
          sed -i '' '1,/^version = /{s/^version = ".*"/version = "${{ steps.get_version.outputs.version }}"/;}' crates/dsperse/Cargo.toml

      - name: Install Rust
        uses: dtolnay/rust-toolchain@631a55b12751854ce901bb631d5902ceb48146f7 # stable 2026-02-13
        with:
          toolchain: nightly-2025-03-27

      - name: Install Rust target
        run: rustup target add aarch64-apple-darwin

      - name: Cache Rust dependencies
        uses: actions/cache@668228422ae6a00e4ad889ee87cd7109ec5666a7 # v5.0.4
        with:
          path: |
            ~/.cargo/registry
            ~/.cargo/git
            target
          key: macos-cargo-${{ hashFiles('**/Cargo.lock') }}
          restore-keys: |
            macos-cargo-

      - name: Build wheel
        run: uvx maturin==${{ env.MATURIN_VERSION }} build --release --target aarch64-apple-darwin
        env:
          MACOSX_DEPLOYMENT_TARGET: "11.0"

      - name: Test wheel installation
        run: |
          uv pip install --system --python python3 target/wheels/*.whl
          python3 -c "from dsperse import slice_model; print('PyO3 bindings OK')"

      - name: Upload wheel artifact
        uses: actions/upload-artifact@bbbca2ddaa5d8feaa63e36b76fdaad77386f024f # v7.0.0
        with:
          name: wheel-macos-aarch64
          path: ./target/wheels/*.whl

  publish:
    if: github.event_name == 'push' && startsWith(github.ref, 'refs/tags/v')
    needs: [build-linux, build-macos]
    runs-on: ubuntu-latest
    timeout-minutes: 15
    permissions:
      contents: write
      id-token: write
    steps:
      - name: Download all wheels
        uses: actions/download-artifact@3e5f45b2cfb9172054b4087a40e8e0b5a5461e7c # v8.0.1
        with:
          pattern: wheel-*
          merge-multiple: true
          path: ./dist

      - name: Extract version from tag
        id: get_version
        shell: bash
        run: |
          VERSION=${GITHUB_REF#refs/tags/v}
          echo "version=$VERSION" >> $GITHUB_OUTPUT

      - name: Create GitHub Release with wheels
        uses: softprops/action-gh-release@153bb8e04406b158c6c84fc1615b65b24149a1fe # v2.6.1
        with:
          name: Release ${{ steps.get_version.outputs.version }}
          files: ./dist/*.whl

      - name: Install uv
        uses: astral-sh/setup-uv@5a095e7a2014a4212f075830d4f7277575a9d098 # v7
        with:
          version: ${{ env.UV_VERSION }}

      - name: Publish to PyPI
        run: uv publish ./dist/*


================================================
FILE: .gitignore
================================================
# macOS system files
.DS_Store
.DS_*
tests/models/run
# macOS metadata
._*

# Python cache
__pycache__/
*.py[cod]

# Environment files
.env
.venv/
env/
venv/

# IDE/editor folders
.vscode/
.idea/

# Log files
*.log

# Byte-compiled
*.pyo

# Jupyter Notebook checkpoints
.ipynb_checkpoints/

# Python egg artifacts
*.egg
*.egg-info/
dist/
build/
eggs/
parts/
bin/
var/
sdist/
develop-eggs/
.installed.cfg

# ignore the models we test with
*/models/*/slices
*/src/models/*/slices/
*/models/*/model_metadata.json
*/src/models/*/model_metadata.json
*/models/*/analysis/model_metadata.json
*/src/models/*/analysis/model_metadata.json
*/models/*/run
*/src/models/*/run/
*/models/*/input.json
*/src/models/*/input.json
*/models/*/*.onnx
*/src/models/*/*.onnx
*/models/*/*.dsperse
*/src/models/*/*.dsperse
*/models/*/*.data
*/src/models/*/*.data


# Local virtual envs
python.venv/
.venv/
venv/

# Slice output directories
pitch-sliced/
*-sliced/

# Test output
tests/models/output/
/target
/crates/*/target


================================================
FILE: Cargo.toml
================================================
[workspace]
members = ["crates/dsperse"]
resolver = "2"

[workspace.package]
edition = "2024"

[workspace.dependencies]
serde = { version = "1", features = ["derive"] }
rmpv = { version = "1", features = ["with-serde"] }
rmp-serde = "1"
thiserror = "2"
clap = { version = "4", features = ["derive", "env"] }
tracing = "0.1"
tracing-subscriber = { version = "0.3", features = ["env-filter"] }
rayon = "1"
ndarray = { version = "0.17", features = ["serde"] }
tract-onnx = { git = "https://github.com/inference-labs-inc/tract.git", rev = "3cfae7f7" }
uuid = { version = "1", features = ["v4"] }
sha2 = "0.10"
tempfile = "3"
prost = "0.13"
pyo3 = { version = "0.24" }
jstprove_circuits = { git = "https://github.com/inference-labs-inc/JSTprove.git", rev = "87a1859f3487cf0fb9a463dbfd713b1df4827afc" }
jstprove_io = { git = "https://github.com/inference-labs-inc/JSTprove.git", rev = "87a1859f3487cf0fb9a463dbfd713b1df4827afc", package = "jstprove-io" }
reqwest = { version = "0.12", default-features = false, features = ["rustls-tls", "json"] }
tokio = { version = "1", features = ["rt", "macros"] }


================================================
FILE: LICENSE
================================================
Copyright (c) 2025 Inference Labs Inc.

Source Access Grant
You may access, view, study, and modify the source code of this software.

Redistribution Conditions
You may redistribute this software in source or modified form provided that:
a) You retain this license document and all copyright notices
b) Any modified files carry prominent notices stating you changed them
c) You do not misrepresent the origin of the software

Usage Restriction
NO USE RIGHTS ARE GRANTED BY THIS LICENSE. Any operational use, including but not limited to:
- Execution of the software
- Integration with other systems
- Deployment in any environment
- Commercial or production utilization
requires express written permission from the IP Owner.

Intellectual Property Reservation
All rights not expressly granted herein are reserved by the IP Owner. For usage permissions, contact: legal@inferencelabs.com

Disclaimer
THIS SOFTWARE IS PROVIDED "AS IS" WITHOUT WARRANTY OF ANY KIND. THE IP OWNER SHALL NOT BE LIABLE FOR ANY DAMAGES ARISING FROM ACCESS OR DISTRIBUTION.

License Propagation
Any distribution of this software or derivatives must be under this same license agreement.

================================================
FILE: README.md
================================================
# DSperse: Community Edition

[![GitHub](https://img.shields.io/badge/GitHub-Repository-blue?style=flat-square&logo=github)](https://github.com/inference-labs-inc/dsperse)
[![Discord](https://img.shields.io/badge/Discord-Join%20Community-7289DA?style=flat-square&logo=discord)](https://discord.gg/GBxBCWJs)
[![Telegram](https://img.shields.io/badge/Telegram-Join%20Channel-0088cc?style=flat-square&logo=telegram)](https://t.me/inference_labs)
[![Twitter](https://img.shields.io/badge/Twitter-Follow%20Us-1DA1F2?style=flat-square&logo=twitter)](https://x.com/inference_labs)
[![Website](https://img.shields.io/badge/Website-Visit%20Us-ff7139?style=flat-square&logo=firefox-browser)](https://inferencelabs.com)
[![Whitepaper](https://img.shields.io/badge/Whitepaper-Read-lightgrey?style=flat-square&logo=read-the-docs)](http://arxiv.org/abs/2508.06972)

DSperse is a proving-system-agnostic intelligent slicer for verifiable AI. It decomposes ONNX neural network models into circuit-compatible segments and orchestrates compilation, inference, proving, and verification across pluggable ZK backends.

## Features

- **Model Slicing**: Split neural network models into individual layers or custom segments
- **ONNX Support**: Slice and orchestrate ONNX models
- **Layered Inference**: Run inference on sliced models, chaining the output of each segment
- **Zero-Knowledge Proofs**: Generate and verify proofs for model execution via JSTprove
- **Tiling and Channel Splitting**: Automatically decompose large convolutions for circuit-compatible execution
- **Proof System Agnostic**: Pluggable backend architecture supporting Expander and Remainder proof systems
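
The channel-splitting idea can be illustrated independently of dsperse's internals: a convolution is linear in its input channels, so convolving channel groups separately and summing the partial outputs reproduces the full result. The sketch below is plain Python, not dsperse code, and demonstrates that identity for a tiny 1-D case.

```python
# Conceptual sketch (not dsperse's implementation): a conv over C input
# channels equals the sum of partial convs over any partition of those
# channels, which is what makes channel splitting sound.

def conv1d_single(signal, kernel):
    """Valid-mode 1-D correlation of a single channel."""
    n = len(signal) - len(kernel) + 1
    return [
        sum(signal[i + j] * kernel[j] for j in range(len(kernel)))
        for i in range(n)
    ]

def conv1d_multichannel(channels, kernels):
    """Sum per-channel convolutions, as a conv layer does."""
    outs = [conv1d_single(s, k) for s, k in zip(channels, kernels)]
    return [sum(vals) for vals in zip(*outs)]

channels = [[1, 2, 3, 4], [0, 1, 0, 1], [2, 2, 2, 2], [3, 0, 1, 0]]
kernels = [[1, -1], [2, 0], [0, 1], [1, 1]]

full = conv1d_multichannel(channels, kernels)

# Split the 4 input channels into two groups of 2 and sum the partials.
partial_a = conv1d_multichannel(channels[:2], kernels[:2])
partial_b = conv1d_multichannel(channels[2:], kernels[2:])
recombined = [a + b for a, b in zip(partial_a, partial_b)]

assert recombined == full  # the split computation matches the full conv
```

The same linearity argument is what lets a sliced model chain partial results without changing the overall output.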

## Documentation

- [Overview](docs/overview.md): High-level overview of the project, its goals, and features
- [JSTprove Backend](docs/JSTPROVE_BACKEND.md): JSTprove integration and usage

## Installation

### From PyPI (includes CLI)

```bash
pip install dsperse
```

This installs both the `dsperse` CLI command and the Python library bindings. No additional dependencies are required; everything is compiled into a single native extension.

### From source (Rust binary)

```bash
cargo install --path crates/dsperse
```

### As a Rust library

```toml
[dependencies]
dsperse = { git = "https://github.com/inference-labs-inc/dsperse.git" }
```

## CLI Usage

DSperse provides six subcommands that form a complete pipeline:

| Command | Description |
|---------|-------------|
| `slice` | Split an ONNX model into segments |
| `compile` | Compile slices into ZK circuits |
| `run` | Execute chained inference across slices (`--weights` to inject consumer ONNX) |
| `prove` | Generate ZK proofs for a completed run |
| `verify` | Verify ZK proofs |
| `full-run` | Execute compile, run, prove, verify in sequence (supports `--weights`) |

### Quickstart

```bash
dsperse slice --model-dir models/net
dsperse compile --model-dir models/net --parallel 4
dsperse run --model-dir models/net --input-file models/net/input.json
dsperse prove --model-dir models/net --run-dir models/net/run/run_*
dsperse verify --model-dir models/net --run-dir models/net/run/run_*
```

Or run the entire pipeline at once:

```bash
dsperse full-run --model-dir models/net --input-file models/net/input.json
```

To inject consumer weights from a fine-tuned ONNX model (same architecture, different weights):

```bash
dsperse run --model-dir models/net --input-file models/net/input.json --weights path/to/consumer.onnx
dsperse full-run --model-dir models/net --input-file models/net/input.json --weights path/to/consumer.onnx
```

## Python Library Usage

```python
import dsperse

metadata_json = dsperse.slice_model("models/net/model.onnx", output_dir="models/net/slices")
dsperse.compile_slices("models/net/slices", parallel=4)
run_json = dsperse.run_inference("models/net/slices", "models/net/input.json", "models/net/run")
proof_json = dsperse.prove_run("models/net/run", "models/net/slices")
verify_json = dsperse.verify_run("models/net/run", "models/net/slices")
```

To inject consumer weights at inference time, pass `weights_onnx` (path to a fine-tuned ONNX with the same architecture):

```python
run_json = dsperse.run_inference(
    "models/net/slices", "models/net/input.json", "models/net/run",
    weights_onnx="path/to/consumer.onnx",
)
```

`slice_model`, `run_inference`, `prove_run`, and `verify_run` return JSON strings parseable with `json.loads()`. `compile_slices` returns `None`.
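
Since these functions return JSON strings rather than Python objects, a typical pattern is to decode them with `json.loads` before inspecting fields. The snippet below is a minimal sketch; the `metadata_json` literal and its keys are hypothetical stand-ins for whatever `slice_model` actually returns in your version.

```python
import json

# Hypothetical payload standing in for the string slice_model() returns;
# the real keys may differ between dsperse versions.
metadata_json = '{"model_type": "ONNX", "slices": [{"index": 0}, {"index": 1}]}'

metadata = json.loads(metadata_json)
print(metadata["model_type"])   # decoded into a plain dict
print(len(metadata["slices"]))  # slice entries become a list of dicts
```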

## Project Structure

```text
crates/dsperse/
  src/
    cli/          CLI argument parsing and command dispatch
    slicer/       ONNX model analysis, slicing, autotiling, channel splitting
    pipeline/     Compilation, inference, proving, verification orchestration
    backend/      JSTprove backend integration
    schema/       Metadata and execution result types (serde)
    converter.rs  Prepares JSTprove artifacts from ONNX files
    utils/        I/O helpers and path resolution
  tests/          Unit and integration tests
python/           Thin Python wrapper for PyO3 bindings
```

## Contributing

Contributions are welcome. Please open issues and PRs on GitHub.

## License

See the [LICENSE](LICENSE) file for details.


================================================
FILE: crates/dsperse/Cargo.toml
================================================
[package]
name = "dsperse"
version = "0.0.0"
edition.workspace = true

[features]
default = []
python = ["dep:pyo3", "pyo3/extension-module"]

[dependencies]
serde.workspace = true
rmpv.workspace = true
rmp-serde.workspace = true
thiserror.workspace = true
clap.workspace = true
tracing.workspace = true
tracing-subscriber.workspace = true
rayon.workspace = true
ndarray.workspace = true
tract-onnx.workspace = true
uuid.workspace = true
sha2.workspace = true
tempfile.workspace = true
prost.workspace = true
pyo3 = { workspace = true, optional = true }
serde_json = "1"
zip = { version = "2", default-features = false, features = ["deflate"] }
walkdir = "2"
jstprove_circuits.workspace = true
jstprove_io.workspace = true
reqwest.workspace = true
tokio.workspace = true

[target.'cfg(unix)'.dependencies]
libc = "0.2"

[build-dependencies]
prost-build = "0.13"

[dev-dependencies]
criterion = { version = "0.5", features = ["html_reports"] }

[[bench]]
name = "serialization"
harness = false

[lib]
name = "dsperse"
crate-type = ["cdylib", "lib"]


================================================
FILE: crates/dsperse/benches/serialization.rs
================================================
use std::collections::HashMap;

use criterion::{Criterion, black_box, criterion_group, criterion_main};
use dsperse::schema::execution::{
    ExecutionChain, ExecutionInfo, ExecutionMethod, ExecutionNode, ExecutionResultEntry,
    RunMetadata, SliceResult, TileResult,
};
use dsperse::schema::metadata::{
    BackendKind, Compilation, Dependencies, ModelMetadata, RunSliceMetadata, SliceMetadata,
    SliceShapeWrapper, TensorShape,
};
use serde::{Deserialize, Serialize};

fn make_slice_metadata(index: usize) -> SliceMetadata {
    SliceMetadata {
        index,
        filename: format!("slice_{index}.onnx"),
        path: format!("/tmp/slices/slice_{index}/payload/slice_{index}.onnx"),
        relative_path: format!("slice_{index}/payload/slice_{index}.onnx"),
        shape: SliceShapeWrapper {
            tensor_shape: TensorShape {
                input: vec![vec![1, 3, 224, 224]],
                output: vec![vec![1, 64, 112, 112]],
            },
        },
        dependencies: Dependencies {
            input: vec![format!("input_{index}")],
            output: vec![format!("output_{index}")],
            filtered_inputs: vec![format!("input_{index}")],
        },
        tiling: None,
        channel_split: None,
        dim_split: None,
        compilation: Compilation::default(),
        slice_metadata: Some(format!("slice_{index}/metadata.msgpack")),
        slice_metadata_relative_path: Some(format!("slice_{index}/metadata.msgpack")),
    }
}

fn make_model_metadata(num_slices: usize) -> ModelMetadata {
    let slices: Vec<SliceMetadata> = (0..num_slices).map(make_slice_metadata).collect();
    let slice_points: Vec<usize> = (0..=num_slices).collect();
    ModelMetadata {
        original_model: "/tmp/model.onnx".into(),
        model_type: "ONNX".into(),
        input_shape: vec![vec![1, 3, 224, 224]],
        output_shapes: vec![vec![1, 1000]],
        output_names: vec!["output".into()],
        slice_points,
        slices,
        dsperse_version: Some("0.0.0".into()),
        dsperse_rev: Some("abc1234".into()),
        jstprove_version: Some("0.1.0".into()),
        jstprove_rev: Some("def5678".into()),
        traced_shapes: None,
        traced_types: None,
        original_model_path: None,
        folded_constant_names: vec![],
    }
}

fn make_run_metadata(num_slices: usize) -> RunMetadata {
    let mut slices = HashMap::new();
    let mut nodes = HashMap::new();
    let mut execution_results = Vec::new();

    for i in 0..num_slices {
        let slice_id = format!("slice_{i}");
        slices.insert(
            slice_id.clone(),
            RunSliceMetadata {
                path: format!("slice_{i}/payload/slice_{i}.onnx"),
                input_shape: vec![vec![1, 3, 224, 224]],
                output_shape: vec![vec![1, 64, 112, 112]],
                dependencies: Dependencies {
                    input: vec![format!("input_{i}")],
                    output: vec![format!("output_{i}")],
                    filtered_inputs: vec![format!("input_{i}")],
                },
                tiling: None,
                channel_split: None,
                dim_split: None,
                backend: BackendKind::Jstprove,
                jstprove_circuit_path: Some(format!("slice_{i}/jstprove/circuit.bin")),
                jstprove_settings_path: None,
            },
        );
        nodes.insert(
            slice_id.clone(),
            ExecutionNode {
                slice_id: slice_id.clone(),
                primary: Some("jstprove_gen_witness".into()),
                fallbacks: vec!["onnx_only".into()],
                use_circuit: true,
                next: if i + 1 < num_slices {
                    Some(format!("slice_{}", i + 1))
                } else {
                    None
                },
                circuit_path: Some(format!("slice_{i}/jstprove/circuit.bin")),
                onnx_path: Some(format!("slice_{i}/payload/slice_{i}.onnx")),
                backend: BackendKind::Jstprove,
            },
        );
        execution_results.push(ExecutionResultEntry {
            slice_id: slice_id.clone(),
            witness_execution: Some(ExecutionInfo {
                method: ExecutionMethod::JstproveGenWitness,
                success: true,
                error: None,
                witness_file: Some(format!("runs/run_0/{slice_id}/witness.bin")),
                tile_exec_infos: vec![TileResult {
                    tile_idx: 0,
                    success: true,
                    error: None,
                    method: Some(ExecutionMethod::JstproveGenWitness),
                    time_sec: 1.23,
                    proof_path: None,
                }],
            }),
            proof_execution: Some(SliceResult {
                slice_id: slice_id.clone(),
                success: true,
                method: Some(ExecutionMethod::JstproveProve),
                error: None,
                proof_path: Some(format!("runs/run_0/{slice_id}/proof.bin")),
                time_sec: 45.67,
                tiles: Vec::new(),
            }),
            verification_execution: None,
        });
    }

    RunMetadata {
        slices,
        execution_chain: ExecutionChain {
            head: Some("slice_0".into()),
            nodes,
            fallback_map: HashMap::new(),
            execution_results,
            jstprove_proved_slices: num_slices,
            jstprove_verified_slices: 0,
        },
        packaging_type: Some("dsperse".into()),
        source_path: Some("/tmp/model.onnx".into()),
        run_directory: Some("/tmp/runs/run_0".into()),
        model_path: Some("/tmp/model.onnx".into()),
    }
}

fn bench_roundtrip<T: Serialize + for<'de> Deserialize<'de>>(
    c: &mut Criterion,
    name: &str,
    value: &T,
) {
    let json_bytes = serde_json::to_vec(value).unwrap();
    let msgpack_bytes = rmp_serde::to_vec_named(value).unwrap();

    let group_name = format!(
        "{name} (json={}, msgpack={})",
        json_bytes.len(),
        msgpack_bytes.len()
    );
    let mut group = c.benchmark_group(&group_name);

    group.bench_function("json_serialize", |b| {
        b.iter(|| serde_json::to_vec(black_box(value)).unwrap());
    });
    group.bench_function("msgpack_serialize", |b| {
        b.iter(|| rmp_serde::to_vec_named(black_box(value)).unwrap());
    });
    group.bench_function("json_deserialize", |b| {
        b.iter(|| serde_json::from_slice::<T>(black_box(&json_bytes)).unwrap());
    });
    group.bench_function("msgpack_deserialize", |b| {
        b.iter(|| rmp_serde::from_slice::<T>(black_box(&msgpack_bytes)).unwrap());
    });

    group.finish();
}

fn serialization_benchmarks(c: &mut Criterion) {
    let small_model = make_model_metadata(4);
    let large_model = make_model_metadata(64);
    let small_run = make_run_metadata(4);
    let large_run = make_run_metadata(64);

    bench_roundtrip(c, "ModelMetadata_4slices", &small_model);
    bench_roundtrip(c, "ModelMetadata_64slices", &large_model);
    bench_roundtrip(c, "RunMetadata_4slices", &small_run);
    bench_roundtrip(c, "RunMetadata_64slices", &large_run);
}

criterion_group!(benches, serialization_benchmarks);
criterion_main!(benches);


================================================
FILE: crates/dsperse/build.rs
================================================
fn main() {
    prost_build::Config::new()
        .compile_protos(&["proto/onnx.proto"], &["proto/"])
        .expect("Failed to compile ONNX proto");

    let git_rev = std::process::Command::new("git")
        .args(["rev-parse", "--short", "HEAD"])
        .output()
        .ok()
        .filter(|o| o.status.success())
        .map(|o| String::from_utf8_lossy(&o.stdout).trim().to_string());

    if let Some(ref rev) = git_rev {
        println!("cargo:rustc-env=DSPERSE_GIT_REV={rev}");
    }

    let pkg_version = std::env::var("CARGO_PKG_VERSION").unwrap();
    let display_version = match (pkg_version.as_str(), &git_rev) {
        ("0.0.0", Some(rev)) => format!("dev-{rev}"),
        ("0.0.0", None) => "dev".to_string(),
        (v, Some(rev)) => format!("{v}+{rev}"),
        (v, None) => v.to_string(),
    };
    println!("cargo:rustc-env=DSPERSE_DISPLAY_VERSION={display_version}");
    if let Some(output) = std::process::Command::new("git")
        .args(["rev-parse", "--git-path", "HEAD"])
        .output()
        .ok()
        .filter(|o| o.status.success())
    {
        let head_path = String::from_utf8_lossy(&output.stdout).trim().to_string();
        println!("cargo:rerun-if-changed={head_path}");
    }

    if let Some(output) = std::process::Command::new("git")
        .args(["symbolic-ref", "-q", "HEAD"])
        .output()
        .ok()
        .filter(|o| o.status.success())
    {
        let head_ref = String::from_utf8_lossy(&output.stdout).trim().to_string();
        if let Some(output) = std::process::Command::new("git")
            .args(["rev-parse", "--git-path", &head_ref])
            .output()
            .ok()
            .filter(|o| o.status.success())
        {
            let ref_path = String::from_utf8_lossy(&output.stdout).trim().to_string();
            println!("cargo:rerun-if-changed={ref_path}");
        }
    }
}


================================================
FILE: crates/dsperse/proto/onnx.proto
================================================
//
// WARNING: This file is automatically generated!  Please edit onnx.in.proto.
//


// SPDX-License-Identifier: Apache-2.0


syntax = "proto3";

package onnx;

// Overview
//
// ONNX is an open specification that is comprised of the following components:
//
// 1)  A definition of an extensible computation graph model.
// 2)  Definitions of standard data types.
// 3)  Definitions of built-in operators.
//
// This document describes the syntax of models and their computation graphs,
// as well as the standard data types. Together, they are referred to as the ONNX
// Intermediate Representation, or 'IR' for short.
//
// The normative semantic specification of the ONNX IR is found in docs/IR.md.
// Definitions of the built-in neural network operators may be found in docs/Operators.md.

// Notes
//
// Protobuf compatibility
//
// To simplify framework compatibility, ONNX is defined using the subset of protobuf
// that is compatible with both protobuf v2 and v3. This means that we do not use any
// protobuf features that are only available in one of the two versions.
//
// Here are the most notable contortions we have to carry out to work around
// these limitations:
//
//   - No 'map' (added protobuf 3.0). We instead represent mappings as lists
//     of key-value pairs, where order does not matter and duplicates
//     are not allowed.


// Versioning
//
// ONNX versioning is specified in docs/IR.md and elaborated on in docs/Versioning.md
//
// To be compatible with both proto2 and proto3, we will use a version number
// that is not defined by the default value but an explicit enum number.
enum Version {
  // proto3 requires the first enum value to be zero.
  // We add this just to appease the compiler.
  _START_VERSION = 0;
  // The version field is always serialized and we will use it to store the
  // version that the graph is generated from. This helps us set up version
  // control.
  // For the IR, we are using simple numbers starting with 0x00000001,
  // which was the version we published on Oct 10, 2017.
  IR_VERSION_2017_10_10 = 0x0000000000000001;

  // IR_VERSION 2 published on Oct 30, 2017
  // - Added type discriminator to AttributeProto to support proto3 users
  IR_VERSION_2017_10_30 = 0x0000000000000002;

  // IR VERSION 3 published on Nov 3, 2017
  // - For operator versioning:
  //    - Added new message OperatorSetIdProto
  //    - Added opset_import in ModelProto
  // - For vendor extensions, added domain in NodeProto
  IR_VERSION_2017_11_3 = 0x0000000000000003;

  // IR VERSION 4 published on Jan 22, 2019
  // - Relax constraint that initializers should be a subset of graph inputs
  // - Add type BFLOAT16
  IR_VERSION_2019_1_22 = 0x0000000000000004;

  // IR VERSION 5 published on March 18, 2019
  // - Add message TensorAnnotation.
  // - Add quantization annotation in GraphProto to map tensor with its scale and zero point quantization parameters.
  IR_VERSION_2019_3_18 = 0x0000000000000005;

  // IR VERSION 6 published on Sep 19, 2019
  // - Add support for sparse tensor constants stored in model.
  //   - Add message SparseTensorProto
  //   - Add sparse initializers
  IR_VERSION_2019_9_19 = 0x0000000000000006;

  // IR VERSION 7 published on May 8, 2020
  // - Add support to allow function body graph to rely on multiple external operator sets.
  // - Add a list to promote inference graph's initializers to global and
  //   mutable variables. Global variables are visible in all graphs of the
  //   stored models.
  // - Add message TrainingInfoProto to store initialization
  //   method and training algorithm. The execution of TrainingInfoProto
  //   can modify the values of mutable variables.
  // - Implicitly add inference graph into each TrainingInfoProto's algorithm.
  IR_VERSION_2020_5_8 = 0x0000000000000007;

  // IR VERSION 8 published on July 30, 2021
  // Introduce TypeProto.SparseTensor
  // Introduce TypeProto.Optional
  // Added a list of FunctionProtos local to the model
  // Deprecated since_version and operator status from FunctionProto
  IR_VERSION_2021_7_30 = 0x0000000000000008;

  // IR VERSION 9 published on May 5, 2023
  // Added AttributeProto to FunctionProto so that default attribute values can be set.
  // Added FLOAT8E4M3FN, FLOAT8E4M3FNUZ, FLOAT8E5M2, FLOAT8E5M2FNUZ.
  IR_VERSION_2023_5_5 = 0x0000000000000009;

  // IR VERSION 10 published on March 25, 2024
  // Added UINT4, INT4.
  IR_VERSION_2024_3_25 = 0x000000000000000A;

  // IR VERSION 11 published on TBD
  // Added FLOAT4E2M1, multi-device protobuf classes.
  IR_VERSION = 0x000000000000000B;
}

// Attributes
//
// A named attribute containing either singular float, integer, string, graph,
// and tensor values, or repeated float, integer, string, graph, and tensor values.
// An AttributeProto MUST contain the name field, and *only one* of the
// following content fields, effectively enforcing a C/C++ union equivalent.
message AttributeProto {
  reserved 12, 16 to 19;
  reserved "v";

  // Note: this enum is structurally identical to the OpSchema::AttrType
  // enum defined in schema.h.  If you rev one, you likely need to rev the other.
  enum AttributeType {
    UNDEFINED = 0;
    FLOAT = 1;
    INT = 2;
    STRING = 3;
    TENSOR = 4;
    GRAPH = 5;
    SPARSE_TENSOR = 11;
    TYPE_PROTO = 13;

    FLOATS = 6;
    INTS = 7;
    STRINGS = 8;
    TENSORS = 9;
    GRAPHS = 10;
    SPARSE_TENSORS = 12;
    TYPE_PROTOS = 14;
  }

  // The name field MUST be present for this version of the IR.
  string name = 1;           // namespace Attribute

  // if ref_attr_name is not empty, ref_attr_name is the attribute name in parent function.
  // In this case, this AttributeProto does not contain data, and it's a reference of attribute
  // in parent scope.
  // NOTE: This should ONLY be used in function (sub-graph). It's invalid to be used in main graph.
  string ref_attr_name = 21;

  // A human-readable documentation for this attribute. Markdown is allowed.
  string doc_string = 13;

  // The type field MUST be present for this version of the IR.
  // For 0.0.1 versions of the IR, this field was not defined, and
  // implementations needed to use has_field heuristics to determine
  // which value field was in use.  For IR_VERSION 0.0.2 or later, this
  // field MUST be set and match the f|i|s|t|... field in use.  This
  // change was made to accommodate proto3 implementations.
  AttributeType type = 20;   // discriminator that indicates which field below is in use

  // Exactly ONE of the following fields must be present for this version of the IR
  float f = 2;               // float
  int64 i = 3;               // int
  bytes s = 4;               // UTF-8 string
  TensorProto t = 5;         // tensor value
  GraphProto g = 6;          // graph
  SparseTensorProto sparse_tensor = 22;  // sparse tensor value
  // Do not use field below, it's deprecated.
  // optional ValueProto v = 12;         // value - subsumes everything but graph
  TypeProto tp = 14;          // type proto

  repeated float floats = 7;          // list of floats
  repeated int64 ints = 8;            // list of ints
  repeated bytes strings = 9;         // list of UTF-8 strings
  repeated TensorProto tensors = 10;  // list of tensors
  repeated GraphProto graphs = 11;    // list of graph
  repeated SparseTensorProto sparse_tensors = 23; // list of sparse tensors
  repeated TypeProto type_protos = 15;// list of type protos
}

// Defines information on value, including the name, the type, and
// the shape of the value.
message ValueInfoProto {
  // This field MUST be present in this version of the IR.
  string name = 1;     // namespace Value
  // This field MUST be present in this version of the IR for
  // inputs and outputs of the top-level graph.
  TypeProto type = 2;
  // A human-readable documentation for this value. Markdown is allowed.
  string doc_string = 3;
  // Named metadata values; keys should be distinct.
  repeated StringStringEntryProto metadata_props = 4;
}

// Nodes
//
// Computation graphs are made up of a DAG of nodes, which represent what is
// commonly called a "layer" or "pipeline stage" in machine learning frameworks.
//
// For example, it can be a node of type "Conv" that takes in an image, a filter
// tensor and a bias tensor, and produces the convolved output.
message NodeProto {
  repeated string input = 1;    // namespace Value
  repeated string output = 2;   // namespace Value

  // An optional identifier for this node in a graph.
  // This field MAY be absent in this version of the IR.
  string name = 3;     // namespace Node

  // The symbolic identifier of the Operator to execute.
  string op_type = 4;  // namespace Operator
  // The domain of the OperatorSet that specifies the operator named by op_type.
  string domain = 7;   // namespace Domain
  // Overload identifier, used only to map this to a model-local function.
  string overload = 8;

  // Additional named attributes.
  repeated AttributeProto attribute = 5;

  // A human-readable documentation for this node. Markdown is allowed.
  string doc_string = 6;

  // Named metadata values; keys should be distinct.
  repeated StringStringEntryProto metadata_props = 9;

  // Configuration of multi-device annotations.
  repeated NodeDeviceConfigurationProto device_configurations = 10;
}

// IntIntListEntryProto follows the pattern for cross-proto-version maps.
// See https://developers.google.com/protocol-buffers/docs/proto3#maps
message IntIntListEntryProto {
  int64 key = 1;
  repeated int64 value = 2;
};

// Multi-device configuration proto for NodeProto.
message NodeDeviceConfigurationProto {
    // This field MUST be present for this version of the IR.
    // ID of the configuration. MUST match the name of a DeviceConfigurationProto.
    string configuration_id = 1;
    // Sharding spec for the node.
    repeated ShardingSpecProto sharding_spec = 2;
    // Pipeline stage of this node.
    int32 pipeline_stage = 3;
}

// ShardingSpecProto: This describes the sharding spec for a specific
// input or output tensor of a node.
message ShardingSpecProto {
  // This field MUST be present for this version of the IR.
  // Identifies the input or output of the node that is being sharded.
  // Required to match a name specified in the node's input or output list of ValueInfoProtos.
  // It is called `logical tensor` in subsequent descriptions.
  string tensor_name = 1;

  // The following is the list of devices across which the logical
  // tensor is sharded or replicated.
  repeated int64 device = 2;

  // Each element v in above field devices may represent either a
  // device or a set of devices (when we want the same shard/tensor
  // to be replicated across a subset of devices), as indicated by
  // the following optional map. If the map contains an entry for v,
  // then v represents a device group, and the map indicates the set
  // of devices in that group.
  repeated IntIntListEntryProto index_to_device_group_map = 3;

  // The following is the sharded-shape of the tensor, consisting of
  // the sharding-spec for each axis of the tensor.
  repeated ShardedDimProto sharded_dim = 4;
}

// ShardedDimProto: This describes the sharding spec for a single
// axis of a sharded tensor.
message ShardedDimProto {
  // This field MUST be present for this version of the IR.
  // The axis this sharding corresponds to. Must be in the range of
  // [-r, r - 1], where r is the rank of the tensor. Negative axis values mean
  // counting from the back.
  int64 axis = 1;

  // Describes how the tensor on the provided axis is sharded.
  // The common-case is described by a single instance of SimpleShardedDimProto.
  // Multiple instances can be used to handle cases where a sharded
  // tensor is reshaped, fusing multiple axes into one.
  repeated SimpleShardedDimProto simple_sharding = 2;
}

// SimpleShardedDimProto: Indicates that N blocks are divided into M shards.
// N is allowed to be symbolic, while M is required to be a constant.
message SimpleShardedDimProto {
    // Dimension value to be sharded.
    oneof dim {
        int64 dim_value = 1;
        string dim_param = 2;
    }

    // This field MUST be present for this version of the IR.
    // Number of shards to split dim into.
    int64 num_shards = 3;
}

// Training information
// TrainingInfoProto stores information for training a model.
// In particular, this defines two functionalities: an initialization-step
// and a training-algorithm-step. Initialization resets the model
// back to its original state as if no training has been performed.
// Training algorithm improves the model based on input data.
//
// The semantics of the initialization-step is that the initializers
// in ModelProto.graph and in TrainingInfoProto.algorithm are first
// initialized as specified by the initializers in the graph, and then
// updated by the "initialization_binding" in every instance in
// ModelProto.training_info.
//
// The field "algorithm" defines a computation graph which represents a
// training algorithm's step. After the execution of a
// TrainingInfoProto.algorithm, the initializers specified by "update_binding"
// may be immediately updated. If the targeted training algorithm contains
// consecutive update steps (such as block coordinate descent methods),
// the user needs to create a TrainingInfoProto for each step.
message TrainingInfoProto {
  // This field describes a graph to compute the initial tensors
  // upon starting the training process. Initialization graph has no input
  // and can have multiple outputs. Usually, trainable tensors in neural
  // networks are randomly initialized. To achieve that, for each tensor,
  // the user can put a random number operator such as RandomNormal or
  // RandomUniform in TrainingInfoProto.initialization.node and assign its
  // random output to the specific tensor using "initialization_binding".
  // This graph can also set the initializers in "algorithm" in the same
  // TrainingInfoProto; a use case is resetting the number of training
  // iterations to zero.
  //
  // By default, this field is an empty graph and its evaluation does not
  // produce any output. Thus, no initializer would be changed by default.
  GraphProto initialization = 1;

  // This field represents a training algorithm step. Given required inputs,
  // it computes outputs to update initializers in its own or inference graph's
  // initializer lists. In general, this field contains loss node, gradient node,
  // optimizer node, increment of iteration count.
  //
  // An execution of the training algorithm step is performed by executing the
  // graph obtained by combining the inference graph (namely "ModelProto.graph")
  // and the "algorithm" graph. That is, the actual
  // input/initializer/output/node/value_info/sparse_initializer list of
  // the training graph is the concatenation of
  // "ModelProto.graph.input/initializer/output/node/value_info/sparse_initializer"
  // and "algorithm.input/initializer/output/node/value_info/sparse_initializer"
  // in that order. This combined graph must satisfy the normal ONNX conditions.
  // Now, let's provide a visualization of graph combination for clarity.
  // Let the inference graph (i.e., "ModelProto.graph") be
  //    tensor_a, tensor_b -> MatMul -> tensor_c -> Sigmoid -> tensor_d
  // and the "algorithm" graph be
  //    tensor_d -> Add -> tensor_e
  // The combination process results in
  //    tensor_a, tensor_b -> MatMul -> tensor_c -> Sigmoid -> tensor_d -> Add -> tensor_e
  //
  // Notice that an input of a node in the "algorithm" graph may reference the
  // output of a node in the inference graph (but not the other way round). Also, an inference
  // node cannot reference inputs of "algorithm". With these restrictions, the inference graph
  // can always be run independently without training information.
  //
  // By default, this field is an empty graph and its evaluation does not
  // produce any output. Evaluating the default training step never
  // updates any initializers.
  GraphProto algorithm = 2;

  // This field specifies the bindings from the outputs of "initialization" to
  // some initializers in "ModelProto.graph.initializer" and
  // the "algorithm.initializer" in the same TrainingInfoProto.
  // See "update_binding" below for details.
  //
  // By default, this field is empty and no initializer would be changed
  // by the execution of "initialization".
  repeated StringStringEntryProto initialization_binding = 3;

  // Gradient-based training is usually an iterative procedure. In one gradient
  // descent iteration, we apply
  //
  // x = x - r * g
  //
  // where "x" is the optimized tensor, "r" stands for learning rate, and "g" is
  // gradient of "x" with respect to a chosen loss. To avoid adding assignments
  // into the training graph, we split the update equation into
  //
  // y = x - r * g
  // x = y
  //
  // The user needs to save "y = x - r * g" into TrainingInfoProto.algorithm. To
  // tell that "y" should be assigned to "x", the field "update_binding" may
  // contain a key-value pair of strings, "x" (key of StringStringEntryProto)
  // and "y" (value of StringStringEntryProto).
  // For a neural network with multiple trainable (mutable) tensors, there can
  // be multiple key-value pairs in "update_binding".
  //
  // The initializers appearing as keys in "update_binding" are considered
  // mutable variables. This implies the behaviors described below.
  //
  //  1. We have only unique keys in all "update_binding"s so that two
  //     variables may not have the same name. This ensures that one
  //     variable is assigned at most once.
  //  2. The keys must appear in names of "ModelProto.graph.initializer" or
  //     "TrainingInfoProto.algorithm.initializer".
  //  3. The values must be output names of "algorithm" or "ModelProto.graph.output".
  //  4. Mutable variables are initialized to the value specified by the
  //     corresponding initializer, and then potentially updated by
  //     "initializer_binding"s and "update_binding"s in "TrainingInfoProto"s.
  //
  // This field usually contains names of trainable tensors
  // (in ModelProto.graph), optimizer states such as momentums in advanced
  // stochastic gradient methods (in TrainingInfoProto.graph),
  // and number of training iterations (in TrainingInfoProto.graph).
  //
  // By default, this field is empty and no initializer would be changed
  // by the execution of "algorithm".
  repeated StringStringEntryProto update_binding = 4;
}

// Models
//
// ModelProto is a top-level file/container format for bundling a ML model and
// associating its computation graph with metadata.
//
// The semantics of the model are described by the associated GraphProto's.
message ModelProto {
  // The version of the IR this model targets. See Version enum above.
  // This field MUST be present.
  int64 ir_version = 1;

  // The OperatorSets this model relies on.
  // All ModelProtos MUST have at least one entry that
  // specifies which version of the ONNX OperatorSet is
  // being imported.
  //
  // All nodes in the ModelProto's graph will bind against the same-domain/same-op_type
  // operator with the HIGHEST version in the referenced operator sets.
  repeated OperatorSetIdProto opset_import = 8;

  // The name of the framework or tool used to generate this model.
  // This field SHOULD be present to indicate which implementation/tool/framework
  // emitted the model.
  string producer_name = 2;

  // The version of the framework or tool used to generate this model.
  // This field SHOULD be present to indicate which implementation/tool/framework
  // emitted the model.
  string producer_version = 3;

  // Domain name of the model.
  // We use reverse domain names as name space indicators. For example:
  // `com.facebook.fair` or `com.microsoft.cognitiveservices`
  //
  // Together with `model_version` and GraphProto.name, this forms the unique identity of
  // the graph.
  string domain = 4;

  // The version of the graph encoded. See Version enum below.
  int64 model_version = 5;

  // A human-readable documentation for this model. Markdown is allowed.
  string doc_string = 6;

  // The parameterized graph that is evaluated to execute the model.
  GraphProto graph = 7;

  // Named metadata values; keys should be distinct.
  repeated StringStringEntryProto metadata_props = 14;

  // Training-specific information. Sequentially executing all stored
  // `TrainingInfoProto.algorithm`s and assigning their outputs following
  // the corresponding `TrainingInfoProto.update_binding`s is one training
  // iteration. Similarly, to initialize the model
  // (as if training hasn't happened), the user should sequentially execute
  // all stored `TrainingInfoProto.initialization`s and assigns their outputs
  // using `TrainingInfoProto.initialization_binding`s.
  //
  // If this field is empty, the training behavior of the model is undefined.
  repeated TrainingInfoProto training_info = 20;

  // A list of function protos local to the model.
  //
  // The (domain, name, overload) tuple must be unique across the function protos in this list.
  // In case of any conflicts, the behavior (whether the model-local functions are given higher
  // priority, or standard operator sets are given higher priority, or this is treated as an
  // error) is defined by the runtimes.
  //
  // The operator sets imported by FunctionProto should be compatible with the ones
  // imported by ModelProto and other model local FunctionProtos.
  // For example, if the same operator set, say 'A', is imported by a FunctionProto and the
  // ModelProto, or by two FunctionProtos, then the imported versions may differ, but the
  // operator schema returned for a given (op_type, domain, version) combination
  // must be the same for every node in the function body.
  //
  // One FunctionProto can reference other FunctionProto in the model, however, recursive reference
  // is not allowed.
  repeated FunctionProto functions = 25;

  // Describes different target configurations for a multi-device use case.
  // A model MAY describe multiple multi-device configurations for execution.
  repeated DeviceConfigurationProto configuration = 26;
};

// DeviceConfigurationProto describes a multi-device configuration for a model.
message DeviceConfigurationProto {
    // This field MUST be present for this version of the IR.
    // Name of the configuration.
    string name = 1;
    // This field MUST be present for this version of the IR.
    // Number of devices inside this configuration.
    int32 num_devices = 2;
    // Optional names of the devices. MUST have length num_devices if provided.
    repeated string device = 3;
}

// StringStringEntryProto follows the pattern for cross-proto-version maps.
// See https://developers.google.com/protocol-buffers/docs/proto3#maps
message StringStringEntryProto {
  string key = 1;
  string value = 2;
};

message TensorAnnotation {
  string tensor_name = 1;
  // <key, value> pairs to annotate tensor specified by <tensor_name> above.
  // The keys used in the mapping below must be pre-defined in ONNX spec.
  // For example, for 8-bit linear quantization case, 'SCALE_TENSOR', 'ZERO_POINT_TENSOR' will be pre-defined as
  // quantization parameter keys.
  repeated StringStringEntryProto quant_parameter_tensor_names = 2;
}



// Graphs
//
// A graph defines the computational logic of a model and is comprised of a parameterized
// list of nodes that form a directed acyclic graph based on their inputs and outputs.
// This is the equivalent of the "network" or "graph" in many deep learning
// frameworks.
message GraphProto {
  // The nodes in the graph, sorted topologically.
  repeated NodeProto node = 1;

  // The name of the graph.
  string name = 2;   // namespace Graph

  // A list of named tensor values, used to specify constant inputs of the graph.
  // Each initializer (both TensorProto and SparseTensorProto) MUST have a name.
  // The name MUST be unique across both initializer and sparse_initializer,
  // but the name MAY also appear in the input list.
  repeated TensorProto initializer = 5;

  // Initializers (see above) stored in sparse format.
  repeated SparseTensorProto sparse_initializer = 15;

  // A human-readable documentation for this graph. Markdown is allowed.
  string doc_string = 10;

  // The inputs and outputs of the graph.
  repeated ValueInfoProto input = 11;
  repeated ValueInfoProto output = 12;

  // Information for the values in the graph. The ValueInfoProto.name's
  // must be distinct. It is optional for a value to appear in value_info list.
  repeated ValueInfoProto value_info = 13;

  // This field carries information to indicate the mapping among a tensor and its
  // quantization parameter tensors. For example:
  // For tensor 'a', it may have {'SCALE_TENSOR', 'a_scale'} and {'ZERO_POINT_TENSOR', 'a_zero_point'} annotated,
  // which means, tensor 'a_scale' and tensor 'a_zero_point' are scale and zero point of tensor 'a' in the model.
  repeated TensorAnnotation quantization_annotation = 14;

  // Named metadata values; keys should be distinct.
  repeated StringStringEntryProto metadata_props = 16;

  reserved 3, 4, 6 to 9;
  reserved "ir_version", "producer_version", "producer_tag", "domain";
}

// Tensors
//
// A serialized tensor value.
message TensorProto {
  enum DataType {
    UNDEFINED = 0;
    // Basic types.
    FLOAT = 1;   // float
    UINT8 = 2;   // uint8_t
    INT8 = 3;    // int8_t
    UINT16 = 4;  // uint16_t
    INT16 = 5;   // int16_t
    INT32 = 6;   // int32_t
    INT64 = 7;   // int64_t
    STRING = 8;  // string
    BOOL = 9;    // bool

    // IEEE754 half-precision floating-point format (16 bits wide).
    // This format has 1 sign bit, 5 exponent bits, and 10 mantissa bits.
    FLOAT16 = 10;

    DOUBLE = 11;
    UINT32 = 12;
    UINT64 = 13;
    COMPLEX64 = 14;     // complex with float32 real and imaginary components
    COMPLEX128 = 15;    // complex with float64 real and imaginary components

    // Non-IEEE floating-point format based on IEEE754 single-precision
    // floating-point number truncated to 16 bits.
    // This format has 1 sign bit, 8 exponent bits, and 7 mantissa bits.
    BFLOAT16 = 16;

    // Non-IEEE floating-point format based on papers
    // FP8 Formats for Deep Learning, https://arxiv.org/abs/2209.05433,
    // 8-bit Numerical Formats For Deep Neural Networks, https://arxiv.org/pdf/2206.02915.pdf.
    // Operators supporting FP8 are Cast, CastLike, QuantizeLinear, DequantizeLinear.
    // The computation usually happens inside a block quantize / dequantize
    // fused by the runtime.
    FLOAT8E4M3FN = 17;    // float 8, mostly used for coefficients, supports nan, not inf
    FLOAT8E4M3FNUZ = 18;  // float 8, mostly used for coefficients, supports nan, not inf, no negative zero
    FLOAT8E5M2 = 19;      // follows IEEE 754, supports nan, inf, mostly used for gradients
    FLOAT8E5M2FNUZ = 20;  // follows IEEE 754, supports nan, not inf, mostly used for gradients, no negative zero

    // 4-bit integer data types
    UINT4 = 21;  // Unsigned integer in range [0, 15]
    INT4 = 22;   // Signed integer in range [-8, 7], using two's-complement representation

    // 4-bit floating point data types
    FLOAT4E2M1 = 23;

    // Future extensions go here.
  }

  // The shape of the tensor.
  repeated int64 dims = 1;

  // The data type of the tensor.
  // This field MUST have a valid TensorProto.DataType value
  int32 data_type = 2;

  // For very large tensors, we may want to store them in chunks, in which
  // case the following fields will specify the segment that is stored in
  // the current TensorProto.
  message Segment {
    int64 begin = 1;
    int64 end = 2;
  }
  Segment segment = 3;

  // Tensor content must be organized in row-major order.
  //
  // Depending on the data_type field, exactly one of the fields below with
  // name ending in _data is used to store the elements of the tensor.

  // For float and complex64 values
  // Complex64 tensors are encoded as a single array of floats,
  // with the real components appearing in odd numbered positions,
  // and the corresponding imaginary component appearing in the
  // subsequent even numbered position. (e.g., [1.0 + 2.0i, 3.0 + 4.0i]
  // is encoded as [1.0, 2.0, 3.0, 4.0]).
  // When this field is present, the data_type field MUST be FLOAT or COMPLEX64.
  repeated float float_data = 4 [packed = true];

  // For int32, uint8, int8, uint16, int16, uint4, int4, bool, (b)float16, float8, and float4:
  // - (b)float16 and float8 values MUST be converted bit-wise into an unsigned integer
  //   representation before being written to the buffer.
  // - Each pair of uint4, int4, and float4 values MUST be packed as two 4-bit elements into a single byte.
  //   The first element is stored in the 4 least significant bits (LSB),
  //   and the second element is stored in the 4 most significant bits (MSB).
  //
  // Consequently:
  // - For data types with a bit-width of 8 or greater, each `int32_data` stores one element.
  // - For 4-bit data types, each `int32_data` stores two elements.
  //
  // When this field is present, the data_type field MUST be
  // INT32, INT16, INT8, INT4, UINT16, UINT8, UINT4, BOOL, FLOAT16, BFLOAT16, FLOAT8E4M3FN, FLOAT8E4M3FNUZ, FLOAT8E5M2, FLOAT8E5M2FNUZ, FLOAT4E2M1
  repeated int32 int32_data = 5 [packed = true];

  // For strings.
  // Each element of string_data is a UTF-8 encoded Unicode
  // string. No trailing null, no leading BOM. The protobuf "string"
  // scalar type is not used to match ML community conventions.
  // When this field is present, the data_type field MUST be STRING
  repeated bytes string_data = 6;

  // For int64.
  // When this field is present, the data_type field MUST be INT64
  repeated int64 int64_data = 7 [packed = true];

  // Optionally, a name for the tensor.
  string name = 8; // namespace Value

  // A human-readable documentation for this tensor. Markdown is allowed.
  string doc_string = 12;

  // Serializations can either use one of the fields above, or use this
  // raw bytes field. The only exception is the string case, where one is
  // required to store the content in the repeated bytes string_data field.
  //
  // When this raw_data field is used to store a tensor value, elements MUST
  // be stored as fixed-width values in little-endian byte order.
  // Floating-point data types MUST be stored in IEEE 754 format.
  // Complex64 elements must be written as two consecutive FLOAT values, real component first.
  // Complex128 elements must be written as two consecutive DOUBLE values, real component first.
  // Boolean type MUST be written one byte per tensor element (00000001 for true, 00000000 for false).
  // uint4 and int4 values must be packed two per byte: the first element is stored in the 4 LSB and the second element is stored in the 4 MSB.
  //
  // Note: the advantage of the type-specific fields over the raw_data field is
  // that in some cases (e.g. int data) protobuf does better packing via
  // variable-length storage, which may lead to a smaller binary footprint.
  // When this field is present, the data_type field MUST NOT be STRING or UNDEFINED
  bytes raw_data = 9;

  // Data can be stored inside the protobuf file using type-specific fields or raw_data.
  // Alternatively, raw bytes data can be stored in an external file, using the external_data field.
  // external_data stores key-value pairs describing data location. Recognized keys are:
  // - "location" (required) - POSIX filesystem path relative to the directory where the ONNX
  //                           protobuf model was stored
  // - "offset" (optional) - position of byte at which stored data begins. Integer stored as string.
  //                         Offset values SHOULD be multiples of 4096 (page size) to enable mmap support.
  // - "length" (optional) - number of bytes containing data. Integer stored as string.
  // - "checksum" (optional) - SHA1 digest of file specified in under 'location' key.
  repeated StringStringEntryProto external_data = 13;
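
  // Illustrative sketch (not part of the spec text above): a tensor whose
  // data lives in a sidecar file could carry external_data entries such as
  //   {"location": "weights.bin", "offset": "4096", "length": "1024"}
  // with data_location (below) set to EXTERNAL. The file name here is
  // hypothetical.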

  // Location of the data for this tensor. MUST be one of:
  // - DEFAULT - data stored inside the protobuf message. Data is stored in raw_data (if set) otherwise in type-specified field.
  // - EXTERNAL - data stored in an external location as described by external_data field.
  enum DataLocation {
    DEFAULT = 0;
    EXTERNAL = 1;
  }

  // If value not set, data is stored in raw_data (if set) otherwise in type-specified field.
  DataLocation data_location = 14;

  // For double
  // Complex128 tensors are encoded as a single array of doubles,
  // with the real components appearing in odd numbered positions,
  // and the corresponding imaginary component appearing in the
  // subsequent even numbered position. (e.g., [1.0 + 2.0i, 3.0 + 4.0i]
  // is encoded as [1.0, 2.0, 3.0, 4.0]).
  // When this field is present, the data_type field MUST be DOUBLE or COMPLEX128
  repeated double double_data = 10 [packed = true];

  // For uint64 and uint32 values
  // When this field is present, the data_type field MUST be
  // UINT32 or UINT64
  repeated uint64 uint64_data = 11 [packed = true];

  // Named metadata values; keys should be distinct.
  repeated StringStringEntryProto metadata_props = 16;
}

// A serialized sparse-tensor value
message SparseTensorProto {
  // The sequence of non-default values are encoded as a tensor of shape [NNZ].
  // The default-value is zero for numeric tensors, and empty-string for string tensors.
  // values must have a non-empty name present which serves as a name for SparseTensorProto
  // when used in sparse_initializer list.
  TensorProto values = 1;

  // The indices of the non-default values, which may be stored in one of two formats.
  // (a) Indices can be a tensor of shape [NNZ, rank] with the [i,j]-th value
  // corresponding to the j-th index of the i-th value (in the values tensor).
  // (b) Indices can be a tensor of shape [NNZ], in which case the i-th value
  // must be the linearized-index of the i-th value (in the values tensor).
  // The linearized-index can be converted into an index tuple (k_1,...,k_rank)
  // using the shape provided below.
  // The indices must appear in ascending order without duplication.
  // In the first format, the ordering is lexicographic:
  // e.g., the index-value [1,4] must appear before [2,1].
  TensorProto indices = 2;
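
  // Illustrative sketch (not part of the spec text above): for a dense 3x4
  // tensor with non-default values at positions [0,1] and [2,3], format (a)
  // stores indices as a [2, 2] tensor [[0,1],[2,3]], while format (b) stores
  // the row-major linearized indices [1, 11] (0*4+1 and 2*4+3).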

  // The shape of the underlying dense-tensor: [dim_1, dim_2, ... dim_rank]
  repeated int64 dims = 3;
}

// Defines a tensor shape. A dimension can be either an integer value
// or a symbolic variable. A symbolic variable represents an unknown
// dimension.
message TensorShapeProto {
  message Dimension {
    oneof value {
      int64 dim_value = 1;
      string dim_param = 2;   // namespace Shape
    };
    // Standard denotation can optionally be used to denote tensor
    // dimensions with standard semantic descriptions to ensure
    // that operations are applied to the correct axis of a tensor.
    // Refer to https://github.com/onnx/onnx/blob/main/docs/DimensionDenotation.md#denotation-definition
    // for pre-defined dimension denotations.
    string denotation = 3;
  };
  repeated Dimension dim = 1;
}
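
// Illustrative sketch (not part of the spec text above): a batch of RGB
// images with an unknown batch size could be described by the dimensions
//   {dim_param: "N"}, {dim_value: 3}, {dim_value: 224}, {dim_value: 224}
// where "N" is a symbolic variable standing for the unknown dimension.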

// Types
//
// The standard ONNX data types.
message TypeProto {

  message Tensor {
    // This field MUST NOT have the value of UNDEFINED
    // This field MUST have a valid TensorProto.DataType value
    // This field MUST be present for this version of the IR.
    int32 elem_type = 1;
    TensorShapeProto shape = 2;
  }

  // repeated T
  message Sequence {
    // The type and optional shape of each element of the sequence.
    // This field MUST be present for this version of the IR.
    TypeProto elem_type = 1;
  };

  // map<K,V>
  message Map {
    // This field MUST have a valid TensorProto.DataType value
    // This field MUST be present for this version of the IR.
    // This field MUST refer to an integral type ([U]INT{8|16|32|64}) or STRING
    int32 key_type = 1;
    // This field MUST be present for this version of the IR.
    TypeProto value_type = 2;
  };

  // wrapper for Tensor, Sequence, or Map
  message Optional {
    // The type and optional shape of the element wrapped.
    // This field MUST be present for this version of the IR.
    // Possible values correspond to OptionalProto.DataType enum
    TypeProto elem_type = 1;
  };


  message SparseTensor {
    // This field MUST NOT have the value of UNDEFINED
    // This field MUST have a valid TensorProto.DataType value
    // This field MUST be present for this version of the IR.
    int32 elem_type = 1;
    TensorShapeProto shape = 2;
  }


  oneof value {
    // The type of a tensor.
    Tensor tensor_type = 1;

    // NOTE:  DNN-only implementations of ONNX MAY elect to not support non-tensor values
    //        as input and output to graphs and nodes. These types are needed to naturally
    //        support classical ML operators.  DNN operators SHOULD restrict their input
    //        and output types to tensors.

    // The type of a sequence.
    Sequence sequence_type = 4;

    // The type of a map.
    Map map_type = 5;

    // The type of an optional.
    Optional optional_type = 9;


    // Type of the sparse tensor
    SparseTensor sparse_tensor_type = 8;

  }

  // An optional denotation can be used to denote the whole
  // type with a standard semantic description as to what is
  // stored inside. Refer to https://github.com/onnx/onnx/blob/main/docs/TypeDenotation.md#type-denotation-definition
  // for pre-defined type denotations.
  string denotation = 6;
}

// Operator Sets
//
// OperatorSets are uniquely identified by a (domain, opset_version) pair.
message OperatorSetIdProto {
  // The domain of the operator set being identified.
  // The empty string ("") or absence of this field implies the operator
  // set that is defined as part of the ONNX specification.
  // This field MUST be present in this version of the IR when referring to any other operator set.
  string domain = 1;

  // The version of the operator set being identified.
  // This field MUST be present in this version of the IR.
  int64 version = 2;
}

// Operator/function status.
enum OperatorStatus {
    EXPERIMENTAL = 0;
    STABLE = 1;
}

message FunctionProto {
  // The name of the function, similar to op_type in NodeProto.
  // This is part of the unique-id (domain, name, overload) of FunctionProtos in a model.
  string name = 1;

  // Deprecated since IR Version 8
  // optional int64 since_version = 2;
  reserved 2;
  reserved "since_version";

  // Deprecated since IR Version 8
  // optional OperatorStatus status = 3;
  reserved 3;
  reserved "status";

  // The inputs and outputs of the function.
  repeated string input = 4;
  repeated string output = 5;

  // The attribute parameters of the function.
  // It is for function parameters without default values.
  repeated string attribute = 6;

  // The attribute protos of the function.
  // It is for function attributes with default values.
  // A function attribute shall be represented either as
  // a string attribute or an AttributeProto, not both.
  repeated AttributeProto attribute_proto = 11;

  // The nodes in the function.
  repeated NodeProto node = 7;
  // A human-readable documentation for this function. Markdown is allowed.
  string doc_string = 8;

  // The OperatorSets this function body (graph) relies on.
  //
  // All nodes in the function body (graph) will bind against the
  // same-domain/same-op_type operator with the HIGHEST version in the
  // referenced operator sets. This means at most one version can be relied
  // upon for one domain.
  //
  // The operator sets imported by FunctionProto should be compatible with the ones
  // imported by ModelProto. For example, if the same operator set, say 'A', is
  // imported by both FunctionProto and ModelProto, the versions of that operator
  // set may differ, but the operator schema returned for a given (op_type, domain,
  // version) combination should be the same for both versions.

  repeated OperatorSetIdProto opset_import = 9;

  // The domain which this function belongs to.
  // This is part of the unique-id (domain, name, overload) of FunctionProtos in a model.
  string domain = 10;

  // The overload identifier of the function.
  // This is part of the unique-id (domain, name, overload) of FunctionProtos in a model.
  string overload = 13;

  // Information for the values in the function. The ValueInfoProto.name's
  // must be distinct and refer to names in the function (including inputs,
  // outputs, and intermediate values). It is optional for a value to appear
  // in value_info list.
  repeated ValueInfoProto value_info = 12;

  // Named metadata values; keys should be distinct.
  repeated StringStringEntryProto metadata_props = 14;
}

// For using protobuf-lite
option optimize_for = LITE_RUNTIME;



================================================
FILE: crates/dsperse/src/backend/jstprove.rs
================================================
use std::collections::HashMap;
use std::path::{Path, PathBuf};
use std::sync::{Arc, Mutex};

pub use jstprove_circuits::api::ExtractedOutputType as ExtractedOutput;
pub use jstprove_circuits::api::ProofConfigType as ProofConfig;
pub use jstprove_circuits::api::StampedProofConfigType as StampedProofConfig;
pub use jstprove_circuits::api::VerifiedOutputType as VerifiedOutput;
use jstprove_circuits::api::{
    self, ArchitectureType as Architecture, CircuitParamsType as CircuitParams,
    CompiledCircuitType as CompiledCircuit, WANDBType as WANDB,
};
use jstprove_circuits::runner::schema::WitnessRequest;

use crate::error::{DsperseError, Result};

use super::traits::ProofBackend;

#[derive(Debug)]
pub struct JstproveBackend {
    compress: bool,
    bundle_cache: Mutex<HashMap<PathBuf, Arc<CompiledCircuit>>>,
}

impl Default for JstproveBackend {
    fn default() -> Self {
        Self {
            compress: true,
            bundle_cache: Mutex::new(HashMap::new()),
        }
    }
}

impl JstproveBackend {
    pub fn new() -> Self {
        Self::default()
    }

    pub fn with_compress(mut self, compress: bool) -> Self {
        self.compress = compress;
        self
    }

    pub fn compress(&self) -> bool {
        self.compress
    }

    pub fn load_bundle_cached(&self, path: &Path) -> Result<Arc<CompiledCircuit>> {
        let key = path.canonicalize().unwrap_or_else(|_| path.to_path_buf());

        let mut cache = self
            .bundle_cache
            .lock()
            .map_err(|e| DsperseError::Backend(format!("bundle cache lock poisoned: {e}")))?;
        if let Some(bundle) = cache.get(&key) {
            return Ok(Arc::clone(bundle));
        }
        let bundle = Arc::new(load_bundle(path)?);
        cache.insert(key, Arc::clone(&bundle));

        Ok(bundle)
    }

    pub fn clear_cache(&self) {
        let mut cache = match self.bundle_cache.lock() {
            Ok(cache) => cache,
            Err(e) => {
                tracing::warn!("bundle cache lock poisoned on clear: {e}");
                e.into_inner()
            }
        };
        let count = cache.len();
        cache.clear();
        tracing::debug!(cleared = count, "bundle cache cleared");
    }

    /// Evict cached bundles whose canonical path starts with the given
    /// prefix. Used by callers that want to drop a model's entries
    /// without clearing the entire cache.
    pub fn evict_cache_by_prefix(&self, prefix: &Path) {
        let mut cache = match self.bundle_cache.lock() {
            Ok(cache) => cache,
            Err(e) => {
                tracing::warn!("bundle cache lock poisoned on evict: {e}");
                e.into_inner()
            }
        };
        let before = cache.len();
        cache.retain(|k, _| !k.starts_with(prefix));
        let evicted = before - cache.len();
        if evicted > 0 {
            tracing::info!(
                prefix = %prefix.display(),
                evicted,
                remaining = cache.len(),
                "evicted bundle cache entries"
            );
        }
    }

    /// Resolve the proof config for a freshly loaded bundle. Errors if
    /// the bundle does not carry a stamped proof config or if the
    /// stamped version does not match the current spec, so callers can
    /// fail fast on legacy or incompatible bundles instead of running
    /// the wrong prover.
    fn resolve_proof_config(bundle: &CompiledCircuit) -> Result<ProofConfig> {
        let stamped = bundle
            .metadata
            .as_ref()
            .and_then(|m| m.proof_config)
            .ok_or_else(|| {
                DsperseError::Backend(
                    "circuit bundle has no stamped proof_config; recompile with a stamping prover"
                        .into(),
                )
            })?;
        stamped
            .ensure_current()
            .map_err(|e| DsperseError::Backend(format!("incompatible bundle: {e}")))?;
        Ok(stamped.config)
    }

    /// Resolve the proof config without touching the circuit or
    /// witness-solver blobs. Reads only `manifest.msgpack`, which is
    /// kilobytes versus the tens of megabytes a full bundle load
    /// pulls in. Falls back to `resolve_proof_config` on a full
    /// bundle load if the manifest is missing the stamp so callers
    /// still get the same "no stamped proof_config" error path for
    /// legacy bundles rather than a confusing deserialization
    /// failure.
    fn resolve_proof_config_from_manifest(&self, circuit_path: &Path) -> Result<ProofConfig> {
        match jstprove_io::bundle::read_bundle_metadata::<CircuitParams>(circuit_path) {
            Ok((Some(params), _)) => {
                let stamped = params.proof_config.ok_or_else(|| {
                    DsperseError::Backend(
                        "circuit bundle has no stamped proof_config; recompile with a stamping prover"
                            .into(),
                    )
                })?;
                stamped
                    .ensure_current()
                    .map_err(|e| DsperseError::Backend(format!("incompatible bundle: {e}")))?;
                Ok(stamped.config)
            }
            Ok((None, _)) => {
                let bundle = self.load_bundle_cached(circuit_path)?;
                Self::resolve_proof_config(&bundle)
            }
            Err(e) => {
                // Surface the manifest-read failure so operators
                // investigating a slow verify path or a legacy
                // bundle layout can tell the fast path missed
                // rather than silently eating a parse / IO error.
                tracing::debug!(
                    path = %circuit_path.display(),
                    error = %e,
                    "manifest-only proof_config read failed; falling back to full bundle load"
                );
                let bundle = self.load_bundle_cached(circuit_path)?;
                Self::resolve_proof_config(&bundle)
            }
        }
    }

    pub fn compile(
        &self,
        circuit_path: &Path,
        config: ProofConfig,
        params: CircuitParams,
        architecture: Architecture,
        wandb: WANDB,
    ) -> Result<()> {
        let circuit_path_str = circuit_path
            .to_str()
            .ok_or_else(|| DsperseError::Backend("non-UTF8 circuit path".into()))?;

        api::compile(
            circuit_path_str,
            config,
            params,
            architecture,
            wandb,
            self.compress,
        )
        .map_err(|e| DsperseError::Backend(format!("compile: {e}")))?;

        let key = circuit_path
            .canonicalize()
            .unwrap_or_else(|_| circuit_path.to_path_buf());
        self.bundle_cache
            .lock()
            .map_err(|e| DsperseError::Backend(format!("bundle cache lock poisoned: {e}")))?
            .remove(&key);

        Ok(())
    }

    pub fn witness(
        &self,
        circuit_path: &Path,
        input_json: &[u8],
        output_json: &[u8],
    ) -> Result<Vec<u8>> {
        let bundle = self.load_bundle_cached(circuit_path)?;
        let config = Self::resolve_proof_config(&bundle)?;

        let req = WitnessRequest {
            circuit: bundle.circuit.clone(),
            witness_solver: bundle.witness_solver.clone(),
            inputs: input_json.to_vec(),
            outputs: output_json.to_vec(),
            metadata: bundle.metadata.clone(),
        };

        let result = api::witness(config, &req, self.compress)
            .map_err(|e| DsperseError::Backend(format!("witness: {e}")))?;

        Ok(result.witness)
    }

    pub fn witness_f64(
        &self,
        circuit_path: &Path,
        activations: &[f64],
        initializers: &[(Vec<f64>, Vec<usize>)],
    ) -> Result<Vec<u8>> {
        let bundle = self.load_bundle_cached(circuit_path)?;
        let config = Self::resolve_proof_config(&bundle)?;
        let params = bundle.metadata.as_ref().ok_or_else(|| {
            DsperseError::Backend(
                "circuit bundle missing metadata (required for quantization)".into(),
            )
        })?;

        let result = api::witness_f64(
            config,
            &bundle.circuit,
            &bundle.witness_solver,
            params,
            activations,
            initializers,
            self.compress,
        )
        .map_err(|e| DsperseError::Backend(format!("witness_f64: {e}")))?;

        Ok(result.witness)
    }

    pub fn load_params(&self, circuit_path: &Path) -> Result<Option<CircuitParams>> {
        let bundle = self.load_bundle_cached(circuit_path)?;
        Ok(bundle.metadata.clone())
    }

    pub fn prove(&self, circuit_path: &Path, witness_bytes: &[u8]) -> Result<Vec<u8>> {
        let bundle = self.load_bundle_cached(circuit_path)?;
        let config = Self::resolve_proof_config(&bundle)?;

        api::prove(config, &bundle.circuit, witness_bytes, self.compress)
            .map_err(|e| DsperseError::Backend(format!("prove: {e}")))
    }

    pub fn extract_outputs(
        &self,
        witness_bytes: &[u8],
        num_model_inputs: usize,
    ) -> Result<Vec<f64>> {
        Ok(self
            .extract_outputs_full(witness_bytes, num_model_inputs)?
            .outputs)
    }

    /// Full extracted output bundle: inputs, outputs, and the
    /// witness-stamped scale parameters. Holographic verifiers call
    /// this after `verify_holographic` because the holographic
    /// verify path does not reach through `verify_and_extract`, yet
    /// the validator still needs the declared inputs (to cross-check
    /// against what it sent) and the scale fields (to report the
    /// same `VerifiedOutput` shape the non-holographic path
    /// produces). Keeping `extract_outputs` as a thin wrapper
    /// preserves the existing `Vec<f64>` contract for callers that
    /// only want the outputs.
    pub fn extract_outputs_full(
        &self,
        witness_bytes: &[u8],
        num_model_inputs: usize,
    ) -> Result<ExtractedOutput> {
        if num_model_inputs == 0 {
            return Err(DsperseError::Backend(
                "extract_outputs: num_model_inputs must be > 0".into(),
            ));
        }
        api::extract_outputs(witness_bytes, num_model_inputs)
            .map_err(|e| DsperseError::Backend(format!("extract_outputs: {e}")))
    }

    pub fn verify(
        &self,
        circuit_path: &Path,
        witness_bytes: &[u8],
        proof_bytes: &[u8],
    ) -> Result<bool> {
        let bundle = self.load_bundle_cached(circuit_path)?;
        let config = Self::resolve_proof_config(&bundle)?;

        api::verify(config, &bundle.circuit, witness_bytes, proof_bytes)
            .map_err(|e| DsperseError::Backend(format!("verify: {e}")))
    }

    pub fn verify_and_extract(
        &self,
        circuit_path: &Path,
        witness_bytes: &[u8],
        proof_bytes: &[u8],
        num_inputs: usize,
        expected_inputs: Option<&[f64]>,
    ) -> Result<VerifiedOutput> {
        let bundle = self.load_bundle_cached(circuit_path)?;
        let config = Self::resolve_proof_config(&bundle)?;

        api::verify_and_extract(
            config,
            &bundle.circuit,
            witness_bytes,
            proof_bytes,
            num_inputs,
            expected_inputs,
        )
        .map_err(|e| DsperseError::Backend(format!("verify_and_extract: {e}")))
    }

    /// Run holographic GKR setup against the compiled circuit at
    /// `circuit_path` and persist the resulting verifying key as
    /// `vk.bin` inside the bundle directory. The bundle is read from
    /// the cache, so callers that just compiled the bundle through
    /// [`Self::compile`] pay only the holographic setup cost on top.
    ///
    /// `setup_holographic_vk` only succeeds when the bundle was
    /// compiled with `ProofConfig::GoldilocksExt4Whir`; the underlying
    /// jstprove API rejects every other config.
    ///
    /// The vk blob is written using the same compression mode as the
    /// rest of the bundle (`Self::compress`) so
    /// `jstprove_io::bundle::read_vk_only` can decode it via the
    /// shared auto-detecting reader.
    pub fn setup_holographic_vk(&self, circuit_path: &Path) -> Result<()> {
        let bundle = self.load_bundle_cached(circuit_path)?;
        let config = Self::resolve_proof_config(&bundle)?;

        let vk_bytes = api::setup_holographic_vk(config, &bundle.circuit)
            .map_err(|e| DsperseError::Backend(format!("setup_holographic_vk: {e}")))?;

        let vk_path = circuit_path.join("vk.bin");
        let payload = if self.compress {
            jstprove_io::compress_bytes(&vk_bytes)
                .map_err(|e| DsperseError::Backend(format!("compress vk: {e}")))?
        } else {
            vk_bytes
        };
        std::fs::write(&vk_path, &payload).map_err(|e| DsperseError::io(e, &vk_path))?;
        Ok(())
    }

    /// Generate a holographic GKR proof for an existing bundle and
    /// witness. Like [`Self::setup_holographic_vk`] this requires the
    /// bundle to have been compiled with
    /// `ProofConfig::GoldilocksExt4Whir`.
    pub fn prove_holographic(&self, circuit_path: &Path, witness_bytes: &[u8]) -> Result<Vec<u8>> {
        let bundle = self.load_bundle_cached(circuit_path)?;
        let config = Self::resolve_proof_config(&bundle)?;

        api::prove_holographic(config, &bundle.circuit, witness_bytes)
            .map_err(|e| DsperseError::Backend(format!("prove_holographic: {e}")))
    }

    /// Verify a holographic GKR proof against the bundle's vk.bin.
    /// The vk is read independently of the (much larger) circuit
    /// blob, mirroring the validator-side flow where the verifying
    /// party only ever ships the vk.
    pub fn verify_holographic(&self, circuit_path: &Path, proof_bytes: &[u8]) -> Result<bool> {
        // Verifiers only need the vk and the proof config; the
        // circuit and witness solver blobs are not used downstream.
        // Skip load_bundle_cached here so validators that only ever
        // hold vk.bin + manifest.msgpack (the intended light-weight
        // deployment shape) don't fail with a missing circuit.bin
        // and don't pay the tens-of-megabytes read cost.
        let config = self.resolve_proof_config_from_manifest(circuit_path)?;
        let vk_bytes = jstprove_io::bundle::read_vk_only(circuit_path)
            .map_err(|e| DsperseError::Backend(format!("read vk: {e}")))?;

        api::verify_holographic(config, &vk_bytes, proof_bytes)
            .map_err(|e| DsperseError::Backend(format!("verify_holographic: {e}")))
    }
}

impl ProofBackend for JstproveBackend {
    fn prove(&self, circuit_path: &Path, witness_bytes: &[u8]) -> Result<Vec<u8>> {
        self.prove(circuit_path, witness_bytes)
    }

    fn verify(
        &self,
        circuit_path: &Path,
        witness_bytes: &[u8],
        proof_bytes: &[u8],
    ) -> Result<bool> {
        self.verify(circuit_path, witness_bytes, proof_bytes)
    }

    fn witness_f64(
        &self,
        circuit_path: &Path,
        activations: &[f64],
        initializers: &[(Vec<f64>, Vec<usize>)],
    ) -> Result<Vec<u8>> {
        self.witness_f64(circuit_path, activations, initializers)
    }
}

fn load_bundle(circuit_path: &Path) -> Result<CompiledCircuit> {
    let path_str = circuit_path
        .to_str()
        .ok_or_else(|| DsperseError::Backend("non-UTF8 circuit path".into()))?;

    api::read_circuit_bundle(path_str)
        .map_err(|e| DsperseError::Backend(format!("read circuit bundle: {e}")))
}

pub struct WarmCircuit {
    bundle: Arc<CompiledCircuit>,
    pub params: CircuitParams,
    initializers: Vec<(Vec<f64>, Vec<usize>)>,
    compress: bool,
    config: ProofConfig,
}

impl WarmCircuit {
    pub fn load(
        circuit_path: &Path,
        initializers: Vec<(Vec<f64>, Vec<usize>)>,
        backend: &JstproveBackend,
    ) -> Result<Self> {
        let bundle = backend.load_bundle_cached(circuit_path)?;
        let config = JstproveBackend::resolve_proof_config(&bundle)?;
        let params = bundle
            .metadata
            .clone()
            .ok_or_else(|| DsperseError::Backend("circuit bundle missing metadata".into()))?;
        Ok(Self {
            bundle,
            params,
            initializers,
            compress: backend.compress(),
            config,
        })
    }

    pub fn witness_f64(&self, activations: &[f64]) -> Result<Vec<u8>> {
        let result = api::witness_f64(
            self.config,
            &self.bundle.circuit,
            &self.bundle.witness_solver,
            &self.params,
            activations,
            &self.initializers,
            self.compress,
        )
        .map_err(|e| DsperseError::Backend(format!("witness_f64: {e}")))?;

        Ok(result.witness)
    }
}

#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn bundle_cache_starts_empty() {
        let backend = JstproveBackend::default();
        let cache = backend.bundle_cache.lock().unwrap();
        assert!(cache.is_empty());
    }

    #[test]
    fn backend_constructs_without_proof_config_state() {
        let backend = JstproveBackend::default();
        assert!(backend.compress());
    }

    #[test]
    fn clear_cache_on_empty_succeeds() {
        let backend = JstproveBackend::default();
        backend.clear_cache();
        let cache = backend.bundle_cache.lock().unwrap();
        assert!(cache.is_empty());
    }

    #[test]
    fn clear_cache_removes_entries() {
        let backend = JstproveBackend::default();
        let dummy = Arc::new(CompiledCircuit {
            circuit: vec![1, 2, 3],
            witness_solver: vec![],
            metadata: None,
            version: None,
        });
        backend
            .bundle_cache
            .lock()
            .unwrap()
            .insert(PathBuf::from("/tmp/test-circuit"), dummy);
        assert_eq!(backend.bundle_cache.lock().unwrap().len(), 1);
        backend.clear_cache();
        assert!(backend.bundle_cache.lock().unwrap().is_empty());
    }
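
    // Sketch of a companion test for evict_cache_by_prefix, assuming the
    // Path::starts_with semantics used by the retain filter (component-wise
    // prefix match, so "/tmp/model-a/..." matches prefix "/tmp/model-a" but
    // "/tmp/model-ab" would not). The paths are hypothetical.
    #[test]
    fn evict_cache_by_prefix_keeps_unrelated_entries() {
        let backend = JstproveBackend::default();
        let dummy = Arc::new(CompiledCircuit {
            circuit: vec![],
            witness_solver: vec![],
            metadata: None,
            version: None,
        });
        {
            let mut cache = backend.bundle_cache.lock().unwrap();
            cache.insert(PathBuf::from("/tmp/model-a/slice0"), Arc::clone(&dummy));
            cache.insert(PathBuf::from("/tmp/model-b/slice0"), dummy);
        }
        backend.evict_cache_by_prefix(Path::new("/tmp/model-a"));
        // Only the entry under the evicted prefix is dropped.
        let cache = backend.bundle_cache.lock().unwrap();
        assert_eq!(cache.len(), 1);
        assert!(cache.contains_key(Path::new("/tmp/model-b/slice0")));
    }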

    #[test]
    fn load_bundle_cached_returns_error_for_missing_path() {
        let backend = JstproveBackend::default();
        let result = backend.load_bundle_cached(Path::new("/nonexistent/circuit/path"));
        assert!(result.is_err());
        assert!(backend.bundle_cache.lock().unwrap().is_empty());
    }

    #[test]
    fn resolve_proof_config_rejects_unstamped_bundle() {
        let bundle = CompiledCircuit {
            circuit: vec![],
            witness_solver: vec![],
            metadata: None,
            version: None,
        };
        let err = JstproveBackend::resolve_proof_config(&bundle).unwrap_err();
        match err {
            DsperseError::Backend(msg) => {
                assert!(msg.contains("no stamped proof_config"), "{msg}")
            }
            other => panic!("expected Backend error, got {other:?}"),
        }
    }
}


================================================
FILE: crates/dsperse/src/backend/mod.rs
================================================
pub mod jstprove;
pub mod onnx;
pub mod traits;

pub use traits::ProofBackend;


================================================
FILE: crates/dsperse/src/backend/onnx.rs
================================================
use std::collections::HashMap;
use std::path::Path;
use std::sync::Arc;

use ndarray::IxDyn;
use tract_onnx::prelude::*;
use tract_onnx::tract_hir::infer::Factoid;

use crate::error::{DsperseError, Result};

pub fn coerce_tdim_inputs(inputs: &TVec<TValue>) -> TVec<TValue> {
    inputs
        .iter()
        .map(|t| {
            if t.datum_type() == DatumType::TDim {
                // Safety: datum_type() == TDim verified by outer condition
                let view = unsafe { t.as_slice_unchecked::<TDim>() };
                let vals: Vec<i64> = view.iter().map(|d| d.to_i64().unwrap_or(0)).collect();
                Tensor::from_shape(t.shape(), &vals)
                    .map(|t| t.into_tvalue())
                    .unwrap_or_else(|_| t.clone())
            } else {
                t.clone()
            }
        })
        .collect()
}

pub type NamedOutputs = HashMap<String, (Vec<f64>, Vec<usize>)>;

fn load_onnx_model(onnx_path: &Path) -> Result<InferenceModel> {
    tract_onnx::onnx()
        .model_for_path(onnx_path)
        .map_err(|e| DsperseError::Onnx(format!("load {}: {e}", onnx_path.display())))
}

fn resolve_concrete_shape(model: &InferenceModel, input_shape: &[usize]) -> Result<Vec<usize>> {
    let model_shape = model
        .input_fact(0)
        .ok()
        .and_then(|f| f.shape.as_concrete_finite().ok().flatten())
        .map(|s| s.to_vec());

    if input_shape.is_empty() {
        return model_shape.ok_or_else(|| {
            DsperseError::Onnx("symbolic input shape — provide explicit shape".into())
        });
    }

    if let Some(ref ms) = model_shape {
        let model_elems: usize = ms.iter().product();
        let input_elems: usize = input_shape.iter().product();
        if input_shape.len() == 1 && ms.len() > 1 && model_elems == input_elems {
            tracing::debug!(
                model_shape = ?ms,
                provided_shape = ?input_shape,
                "reshaping flat input to model-declared shape"
            );
            return Ok(ms.clone());
        }
    }

    Ok(input_shape.to_vec())
}

fn resolve_input_datum_type(model: &InferenceModel, idx: usize) -> Result<DatumType> {
    let fact = model
        .input_fact(idx)
        .map_err(|e| DsperseError::Onnx(format!("input fact at index {idx}: {e}")))?;
    fact.datum_type.concretize().ok_or_else(|| {
        DsperseError::Onnx(format!(
            "input fact at index {idx} has no concrete datum type; the model must declare a concrete element type for this input"
        ))
    })
}

fn optimize_to_runnable(
    model: InferenceModel,
    concrete_shape: &[usize],
    input_dt: DatumType,
) -> Result<Arc<TypedRunnableModel>> {
    model
        .with_input_fact(0, InferenceFact::dt_shape(input_dt, concrete_shape))
        .map_err(|e| DsperseError::Onnx(format!("set input shape: {e}")))?
        .into_optimized()
        .map_err(|e| DsperseError::Onnx(format!("optimize: {e:#}")))?
        .into_runnable()
        .map_err(|e| DsperseError::Onnx(format!("make runnable: {e:#}")))
}

pub fn run_inference_with_coercion(
    onnx_path: &Path,
    input_data: &[f64],
    input_shape: &[usize],
) -> Result<NamedOutputs> {
    let model = load_onnx_model(onnx_path)?;
    let concrete_shape = resolve_concrete_shape(&model, input_shape)?;
    let input_dt = resolve_input_datum_type(&model, 0)?;

    match optimize_to_runnable(model, &concrete_shape, input_dt) {
        Ok(plan) => {
            let input = build_input_tvalue(input_data, &concrete_shape, input_dt)?;
            let result = plan
                .run(tvec![input])
                .map_err(|e| DsperseError::Onnx(format!("run: {e:#}")))?;
            return extract_all_outputs(&result);
        }
        Err(e) => {
            tracing::warn!(
                error = %e,
                "standard optimization failed; falling back to inference plan with TDim coercion"
            );
        }
    }
    let model2 = load_onnx_model(onnx_path)?;
    let with_shape = model2
        .with_input_fact(0, InferenceFact::dt_shape(input_dt, &concrete_shape))
        .map_err(|e| DsperseError::Onnx(format!("set input: {e}")))?;

    let plan =
        tract_onnx::tract_hir::infer::InferenceSimplePlan::new(std::sync::Arc::new(with_shape))
            .map_err(|e| DsperseError::Onnx(format!("inference plan: {e}")))?;
    let mut state = tract_onnx::tract_core::plan::SimpleState::new(&plan)
        .map_err(|e| DsperseError::Onnx(format!("state: {e}")))?;

    let input = build_input_tvalue(input_data, &concrete_shape, input_dt)?;
    let result = state
        .run_plan_with_eval(tvec![input], |session, op_state, node, inputs| {
            let coerced = coerce_tdim_inputs(&inputs);
            let eval_result = if let Some(st) = op_state {
                st.eval(session, node.op.as_op(), coerced)
            } else {
                node.op.eval(coerced)
            };
            match eval_result {
                Ok(o) => Ok::<_, TractError>(o),
                Err(e) => {
                    let Some(first) = inputs.first() else {
                        return Err(e);
                    };
                    tracing::warn!(node = %node.name, error = %e, "eval failed, using fallback");
                    let dt = first.datum_type();
                    let fallback = Tensor::zero_dt(dt, &[1])
                        .map_err(|alloc_err| {
                            TractError::msg(format!(
                                "node {}: eval failed ({e}); fallback allocation for dtype {dt:?} failed: {alloc_err}",
                                node.name
                            ))
                        })?
                        .into_tvalue();
                    let n = node.outputs.len().max(1);
                    Ok((0..n).map(|_| fallback.clone()).collect())
                }
            }
        })
        .map_err(|e| DsperseError::Onnx(format!("inference run: {e:#}")))?;

    extract_all_outputs(&result)
}

fn extract_all_outputs(result: &[TValue]) -> Result<NamedOutputs> {
    let mut outputs = NamedOutputs::new();
    for (i, tv) in result.iter().enumerate() {
        let label = format!("output_{i}");
        let (data, shape) = tvalue_to_f64(tv, &label)?;
        outputs.insert(label, (data, shape));
    }
    Ok(outputs)
}

fn load_runnable(
    onnx_path: &Path,
    input_shape: &[usize],
) -> Result<(Arc<TypedRunnableModel>, Vec<usize>, DatumType)> {
    let model = load_onnx_model(onnx_path)?;
    let concrete_shape = resolve_concrete_shape(&model, input_shape)?;
    let input_dt = resolve_input_datum_type(&model, 0)?;
    let plan = optimize_to_runnable(model, &concrete_shape, input_dt)?;
    Ok((plan, concrete_shape, input_dt))
}

/// [`I64_SAFE_BOUND`] (2^53) as an `f64`, for range checks on f64 inputs.
const I64_SAFE_BOUND_F64: f64 = I64_SAFE_BOUND as f64;

fn reject_non_finite(v: f64, idx: usize, type_name: &str) -> Result<()> {
    if !v.is_finite() {
        return Err(DsperseError::Onnx(format!(
            "input[{idx}] = {v}: non-finite values are not accepted for {type_name} inputs"
        )));
    }
    Ok(())
}

fn validate_integer_input(
    v: f64,
    idx: usize,
    type_name: &str,
    type_min: f64,
    type_max: f64,
) -> Result<()> {
    reject_non_finite(v, idx, type_name)?;
    if v.trunc() != v {
        return Err(DsperseError::Onnx(format!(
            "input[{idx}] = {v}: fractional component cannot be represented as {type_name}"
        )));
    }
    if v.abs() > I64_SAFE_BOUND_F64 {
        return Err(DsperseError::Onnx(format!(
            "input[{idx}] = {v}: magnitude exceeds IEEE-754 safe integer bound {I64_SAFE_BOUND}"
        )));
    }
    if v < type_min || v > type_max {
        return Err(DsperseError::Onnx(format!(
            "input[{idx}] = {v}: outside representable range [{type_min}, {type_max}] for {type_name}"
        )));
    }
    Ok(())
}

/// Converts caller-provided f64 data into a tensor of the model-declared
/// datum type, rejecting values that cannot be represented exactly:
/// non-finite floats, fractional values for integer dtypes, out-of-range
/// magnitudes, and non-0/1 values for bool inputs.
fn build_input_tvalue(input_data: &[f64], shape: &[usize], dt: DatumType) -> Result<TValue> {
    let f32_max_f64: f64 = f32::MAX as f64;
    macro_rules! build_bounded_int {
        ($t:ty, $name:expr, $min:expr, $max:expr) => {{
            let mut data: Vec<$t> = Vec::with_capacity(input_data.len());
            for (i, &v) in input_data.iter().enumerate() {
                validate_integer_input(v, i, $name, $min as f64, $max as f64)?;
                data.push(v as $t);
            }
            tract_ndarray::ArrayD::from_shape_vec(IxDyn(shape), data)
                .map(|a| a.into_tvalue())
                .map_err(|e| DsperseError::Onnx(format!("input tensor: {e}")))
        }};
    }
    if dt == f32::datum_type() {
        let mut data: Vec<f32> = Vec::with_capacity(input_data.len());
        for (i, &v) in input_data.iter().enumerate() {
            reject_non_finite(v, i, "f32")?;
            if v < -f32_max_f64 || v > f32_max_f64 {
                return Err(DsperseError::Onnx(format!(
                    "input[{i}] = {v}: magnitude exceeds representable f32 range [-{f32_max_f64}, {f32_max_f64}]"
                )));
            }
            data.push(v as f32);
        }
        tract_ndarray::ArrayD::from_shape_vec(IxDyn(shape), data)
            .map(|a| a.into_tvalue())
            .map_err(|e| DsperseError::Onnx(format!("input tensor: {e}")))
    } else if dt == f64::datum_type() {
        let mut data: Vec<f64> = Vec::with_capacity(input_data.len());
        for (i, &v) in input_data.iter().enumerate() {
            reject_non_finite(v, i, "f64")?;
            data.push(v);
        }
        tract_ndarray::ArrayD::from_shape_vec(IxDyn(shape), data)
            .map(|a| a.into_tvalue())
            .map_err(|e| DsperseError::Onnx(format!("input tensor: {e}")))
    } else if dt == u8::datum_type() {
        build_bounded_int!(u8, "u8", u8::MIN, u8::MAX)
    } else if dt == i8::datum_type() {
        build_bounded_int!(i8, "i8", i8::MIN, i8::MAX)
    } else if dt == u16::datum_type() {
        build_bounded_int!(u16, "u16", u16::MIN, u16::MAX)
    } else if dt == i16::datum_type() {
        build_bounded_int!(i16, "i16", i16::MIN, i16::MAX)
    } else if dt == u32::datum_type() {
        build_bounded_int!(u32, "u32", u32::MIN, u32::MAX)
    } else if dt == i32::datum_type() {
        build_bounded_int!(i32, "i32", i32::MIN, i32::MAX)
    } else if dt == u64::datum_type() {
        let mut data: Vec<u64> = Vec::with_capacity(input_data.len());
        for (i, &v) in input_data.iter().enumerate() {
            validate_integer_input(v, i, "u64", 0.0, I64_SAFE_BOUND_F64)?;
            data.push(v as u64);
        }
        tract_ndarray::ArrayD::from_shape_vec(IxDyn(shape), data)
            .map(|a| a.into_tvalue())
            .map_err(|e| DsperseError::Onnx(format!("input tensor: {e}")))
    } else if dt == i64::datum_type() {
        let mut data: Vec<i64> = Vec::with_capacity(input_data.len());
        for (i, &v) in input_data.iter().enumerate() {
            validate_integer_input(v, i, "i64", -I64_SAFE_BOUND_F64, I64_SAFE_BOUND_F64)?;
            data.push(v as i64);
        }
        tract_ndarray::ArrayD::from_shape_vec(IxDyn(shape), data)
            .map(|a| a.into_tvalue())
            .map_err(|e| DsperseError::Onnx(format!("input tensor: {e}")))
    } else if dt == bool::datum_type() {
        let mut data: Vec<bool> = Vec::with_capacity(input_data.len());
        for (i, &v) in input_data.iter().enumerate() {
            reject_non_finite(v, i, "bool")?;
            if v != 0.0 && v != 1.0 {
                return Err(DsperseError::Onnx(format!(
                    "input[{i}] = {v}: bool inputs must be exactly 0 or 1"
                )));
            }
            data.push(v != 0.0);
        }
        tract_ndarray::ArrayD::from_shape_vec(IxDyn(shape), data)
            .map(|a| a.into_tvalue())
            .map_err(|e| DsperseError::Onnx(format!("input tensor: {e}")))
    } else {
        Err(DsperseError::Onnx(format!(
            "unsupported input datum type {dt:?}"
        )))
    }
}

fn run_single(
    plan: &Arc<TypedRunnableModel>,
    input_data: &[f64],
    shape: &[usize],
    dt: DatumType,
) -> Result<TVec<TValue>> {
    let tv = build_input_tvalue(input_data, shape, dt)?;
    plan.run(tvec!(tv))
        .map_err(|e| DsperseError::Onnx(format!("inference: {e}")))
}

/// An ONNX model whose optimized execution plan is built once at load time
/// and reused across calls, avoiding repeated parsing and optimization.
pub struct WarmModel {
    plan: Arc<TypedRunnableModel>,
    input_shape: Vec<usize>,
    input_dt: DatumType,
}

impl WarmModel {
    pub fn load(onnx_path: &Path, input_shape: &[usize]) -> Result<Self> {
        let (plan, input_shape, input_dt) = load_runnable(onnx_path, input_shape)?;
        Ok(Self {
            plan,
            input_shape,
            input_dt,
        })
    }

    pub fn run(&self, input_data: &[f64]) -> Result<(Vec<f64>, Vec<usize>)> {
        let result = run_single(&self.plan, input_data, &self.input_shape, self.input_dt)?;
        extract_first_output(&result)
    }
}

/// One-shot inference: loads and optimizes the model, runs it, and returns
/// the first output as flat f64 data plus its shape.
pub fn run_inference(
    onnx_path: &Path,
    input_data: &[f64],
    input_shape: &[usize],
) -> Result<(Vec<f64>, Vec<usize>)> {
    let (plan, concrete_shape, input_dt) = load_runnable(onnx_path, input_shape)?;
    let result = run_single(&plan, input_data, &concrete_shape, input_dt)?;
    extract_first_output(&result)
}

/// Like [`run_inference`], but returns every output keyed by its model
/// output name, falling back to TDim coercion when optimization fails.
pub fn run_inference_named(
    onnx_path: &Path,
    input_data: &[f64],
    input_shape: &[usize],
) -> Result<NamedOutputs> {
    let model = load_onnx_model(onnx_path)?;
    let output_names = collect_output_names(&model);
    let concrete_shape = resolve_concrete_shape(&model, input_shape)?;
    let input_dt = resolve_input_datum_type(&model, 0)?;
    match optimize_to_runnable(model, &concrete_shape, input_dt) {
        Ok(plan) => {
            let result = run_single(&plan, input_data, &concrete_shape, input_dt)?;
            zip_named_outputs(&output_names, &result)
        }
        Err(e) => {
            tracing::debug!(error = %e, "optimization failed; retrying via the coercion path");
            let mut result = run_inference_with_coercion(onnx_path, input_data, &concrete_shape)?;
            let mut named = NamedOutputs::new();
            for (i, name) in output_names.iter().enumerate() {
                let key = format!("output_{i}");
                if let Some(val) = result.remove(&key) {
                    named.insert(name.clone(), val);
                }
            }
            Ok(named)
        }
    }
}

/// Runs a model with multiple named inputs, returning the first output.
pub fn run_inference_multi(
    onnx_path: &Path,
    inputs: &[(&str, Vec<f64>, Vec<usize>)],
) -> Result<(Vec<f64>, Vec<usize>)> {
    let (result, _) = run_multi_inner(onnx_path, inputs)?;
    extract_first_output(&result)
}

/// Runs a model with multiple named inputs, returning all outputs by name.
pub fn run_inference_multi_named(
    onnx_path: &Path,
    inputs: &[(&str, Vec<f64>, Vec<usize>)],
) -> Result<NamedOutputs> {
    let (result, output_names) = run_multi_inner(onnx_path, inputs)?;
    zip_named_outputs(&output_names, &result)
}

fn run_multi_inner(
    onnx_path: &Path,
    inputs: &[(&str, Vec<f64>, Vec<usize>)],
) -> Result<(TVec<TValue>, Vec<String>)> {
    let mut model = load_onnx_model(onnx_path)?;

    let output_names = collect_output_names(&model);

    let mut input_by_name: HashMap<&str, usize> = HashMap::with_capacity(inputs.len());
    for (idx, (name, _, _)) in inputs.iter().enumerate() {
        if input_by_name.insert(*name, idx).is_some() {
            return Err(DsperseError::Onnx(format!(
                "duplicate provided input name '{name}'"
            )));
        }
    }

    let model_input_count = model.inputs.len();
    let model_input_names: Vec<(usize, String)> = model
        .inputs
        .iter()
        .enumerate()
        .map(|(i, outlet)| (i, model.nodes[outlet.node].name.clone()))
        .collect();

    let mut input_order: Vec<Option<usize>> = vec![None; model_input_count];
    let mut input_dts: Vec<Option<DatumType>> = vec![None; model_input_count];
    for (i, name) in &model_input_names {
        if let Some(&provided_idx) = input_by_name.get(name.as_str()) {
            let dt = resolve_input_datum_type(&model, *i)?;
            model = model
                .with_input_fact(*i, InferenceFact::dt_shape(dt, &inputs[provided_idx].2))
                .map_err(|e| DsperseError::Onnx(format!("set input {i} ({name}) shape: {e}")))?;
            input_order[*i] = Some(provided_idx);
            input_dts[*i] = Some(dt);
        }
    }

    let unknown_inputs: Vec<&str> = input_by_name
        .keys()
        .copied()
        .filter(|name| !model_input_names.iter().any(|(_, n)| n == *name))
        .collect();
    if !unknown_inputs.is_empty() {
        return Err(DsperseError::Onnx(format!(
            "provided inputs not present in model: {unknown_inputs:?}"
        )));
    }

    let model = model
        .into_typed()
        .map_err(|e| {
            let unmatched: Vec<_> = input_order
                .iter()
                .enumerate()
                .filter(|(_, v)| v.is_none())
                .map(|(i, _)| model_input_names[i].1.as_str())
                .collect();
            DsperseError::Onnx(format!("type analysis (unmatched: {unmatched:?}): {e}"))
        })?
        .into_optimized()
        .map_err(|e| DsperseError::Onnx(format!("optimize: {e:#}")))?
        .into_runnable()
        .map_err(|e| DsperseError::Onnx(format!("make runnable: {e:#}")))?;

    let mut input_tvs = TVec::new();
    for (model_idx, idx) in input_order.iter().enumerate() {
        let provided_idx = idx.ok_or_else(|| {
            let name = &model_input_names[model_idx].1;
            DsperseError::Onnx(format!(
                "model input {model_idx} ('{name}') not matched to provided tensors"
            ))
        })?;
        let dt = input_dts[model_idx].ok_or_else(|| {
            let name = &model_input_names[model_idx].1;
            DsperseError::Onnx(format!(
                "model input {model_idx} ('{name}') has no resolved datum type"
            ))
        })?;
        let (_, ref data, ref shape) = inputs[provided_idx];
        input_tvs.push(build_input_tvalue(data, shape, dt)?);
    }

    let result = model
        .run(input_tvs)
        .map_err(|e| DsperseError::Onnx(format!("inference: {e}")))?;

    Ok((result, output_names))
}

fn collect_output_names(model: &InferenceModel) -> Vec<String> {
    model
        .outputs
        .iter()
        .map(|outlet| {
            model
                .outlet_label(*outlet)
                .map(String::from)
                .unwrap_or_else(|| {
                    format!("{}_output_{}", model.nodes[outlet.node].name, outlet.slot)
                })
        })
        .collect()
}

/// 2^53: the largest magnitude at which every integer is exactly
/// representable as an IEEE-754 f64; beyond it, conversions lose precision.
const I64_SAFE_BOUND: i64 = 9_007_199_254_740_992;

fn i64_to_f64_checked(v: i64, label: &str) -> Result<f64> {
    // `v.abs()` would overflow for i64::MIN, so compare magnitudes as u64.
    if v.unsigned_abs() > I64_SAFE_BOUND as u64 {
        return Err(DsperseError::Onnx(format!(
            "{label}: i64 value {v} exceeds IEEE-754 safe integer bound"
        )));
    }
    Ok(v as f64)
}

fn u64_to_f64_checked(v: u64, label: &str) -> Result<f64> {
    if v > I64_SAFE_BOUND as u64 {
        return Err(DsperseError::Onnx(format!(
            "{label}: u64 value {v} exceeds IEEE-754 safe integer bound"
        )));
    }
    Ok(v as f64)
}

/// Decodes a tensor of any supported datum type into flat f64 data plus its
/// shape, rejecting integer values beyond the f64 safe-integer bound.
fn tvalue_to_f64(tv: &TValue, label: &str) -> Result<(Vec<f64>, Vec<usize>)> {
    let shape = tv.shape().to_vec();
    let dt = tv.datum_type();
    let data: Vec<f64> = if dt == f32::datum_type() {
        let arr = tv
            .to_plain_array_view::<f32>()
            .map_err(|e| DsperseError::Onnx(format!("{label}: {e}")))?;
        arr.iter().map(|&v| f64::from(v)).collect()
    } else if dt == f64::datum_type() {
        let arr = tv
            .to_plain_array_view::<f64>()
            .map_err(|e| DsperseError::Onnx(format!("{label}: {e}")))?;
        arr.iter().copied().collect()
    } else if dt == i64::datum_type() {
        let arr = tv
            .to_plain_array_view::<i64>()
            .map_err(|e| DsperseError::Onnx(format!("{label}: {e}")))?;
        arr.iter()
            .map(|&v| i64_to_f64_checked(v, label))
            .collect::<Result<Vec<_>>>()?
    } else if dt == i32::datum_type() {
        let arr = tv
            .to_plain_array_view::<i32>()
            .map_err(|e| DsperseError::Onnx(format!("{label}: {e}")))?;
        arr.iter().map(|&v| f64::from(v)).collect()
    } else if dt == u32::datum_type() {
        let arr = tv
            .to_plain_array_view::<u32>()
            .map_err(|e| DsperseError::Onnx(format!("{label}: {e}")))?;
        arr.iter().map(|&v| f64::from(v)).collect()
    } else if dt == i16::datum_type() {
        let arr = tv
            .to_plain_array_view::<i16>()
            .map_err(|e| DsperseError::Onnx(format!("{label}: {e}")))?;
        arr.iter().map(|&v| f64::from(v)).collect()
    } else if dt == u16::datum_type() {
        let arr = tv
            .to_plain_array_view::<u16>()
            .map_err(|e| DsperseError::Onnx(format!("{label}: {e}")))?;
        arr.iter().map(|&v| f64::from(v)).collect()
    } else if dt == i8::datum_type() {
        let arr = tv
            .to_plain_array_view::<i8>()
            .map_err(|e| DsperseError::Onnx(format!("{label}: {e}")))?;
        arr.iter().map(|&v| f64::from(v)).collect()
    } else if dt == u8::datum_type() {
        let arr = tv
            .to_plain_array_view::<u8>()
            .map_err(|e| DsperseError::Onnx(format!("{label}: {e}")))?;
        arr.iter().map(|&v| f64::from(v)).collect()
    } else if dt == u64::datum_type() {
        let arr = tv
            .to_plain_array_view::<u64>()
            .map_err(|e| DsperseError::Onnx(format!("{label}: {e}")))?;
        arr.iter()
            .map(|&v| u64_to_f64_checked(v, label))
            .collect::<Result<Vec<_>>>()?
    } else if dt == bool::datum_type() {
        let arr = tv
            .to_plain_array_view::<bool>()
            .map_err(|e| DsperseError::Onnx(format!("{label}: {e}")))?;
        arr.iter().map(|&v| if v { 1.0 } else { 0.0 }).collect()
    } else if dt.is_tdim() {
        let casted = tv
            .cast_to::<i64>()
            .map_err(|e| DsperseError::Onnx(format!("{label}: TDim->i64 cast: {e}")))?;
        let arr = casted
            .to_plain_array_view::<i64>()
            .map_err(|e| DsperseError::Onnx(format!("{label}: {e}")))?;
        arr.iter()
            .map(|&v| i64_to_f64_checked(v, label))
            .collect::<Result<Vec<_>>>()?
    } else {
        return Err(DsperseError::Onnx(format!(
            "{label}: unsupported datum type {dt:?}"
        )));
    };
    Ok((data, shape))
}

fn zip_named_outputs(names: &[String], result: &[TValue]) -> Result<NamedOutputs> {
    let mut map = HashMap::new();
    for (i, tv) in result.iter().enumerate() {
        let (data, shape) = tvalue_to_f64(tv, &format!("output {i}"))?;
        let name = names
            .get(i)
            .cloned()
            .unwrap_or_else(|| format!("output_{i}"));
        if map.insert(name.clone(), (data, shape)).is_some() {
            return Err(DsperseError::Onnx(format!(
                "duplicate output name '{name}'"
            )));
        }
    }
    Ok(map)
}

fn extract_first_output(result: &[TValue]) -> Result<(Vec<f64>, Vec<usize>)> {
    let output = result
        .first()
        .ok_or_else(|| DsperseError::Onnx("no output from model".into()))?;
    tvalue_to_f64(output, "output tensor")
}

#[cfg(test)]
mod tests {
    use super::*;

    const TEST_OPS: &[&str] = &["Conv", "Gemm", "MatMul"];

    #[test]
    fn run_inference_on_sliced_model() {
        let models_dir = std::path::PathBuf::from(concat!(
            env!("CARGO_MANIFEST_DIR"),
            "/../../tests/models/net"
        ));
        let model_path = models_dir.join("model.onnx");
        assert!(
            model_path.exists(),
            "fixture missing: {}",
            model_path.display()
        );
        let tmp = tempfile::tempdir().unwrap();
        let meta = crate::slicer::slice_model(&model_path, Some(tmp.path()), None, TEST_OPS, None)
            .expect("slice_model failed");
        crate::slicer::materializer::ensure_all_slices_materialized(tmp.path(), &meta)
            .expect("materialization failed");
        assert!(!meta.slices.is_empty(), "model produced zero slices");
        let first_slice = &meta.slices[0];
        let onnx_path = tmp
            .path()
            .join(format!("slice_0/payload/{}", first_slice.filename));
        assert!(
            onnx_path.exists(),
            "sliced ONNX missing: {}",
            onnx_path.display()
        );
        let input_shape = &first_slice.shape.tensor_shape.input;
        assert!(
            !input_shape.is_empty() && !input_shape[0].is_empty(),
            "empty input shape"
        );
        let shape: Vec<usize> = input_shape[0].iter().map(|&d| d.max(1) as usize).collect();
        let elem_count: usize = shape.iter().product();
        let input_data = vec![0.0f64; elem_count];
        let result = run_inference(&onnx_path, &input_data, &shape);
        assert!(result.is_ok());
        let (output_data, output_shape) = result.unwrap();
        assert!(!output_data.is_empty());
        assert!(!output_shape.is_empty());
    }

    #[test]
    fn run_inference_nonexistent_model() {
        let result = run_inference(Path::new("/nonexistent/model.onnx"), &[1.0], &[1]);
        assert!(result.is_err());
    }

    #[test]
    fn warm_model_load_nonexistent() {
        let result = WarmModel::load(Path::new("/nonexistent/model.onnx"), &[1, 1, 28, 28]);
        assert!(result.is_err());
    }

    #[test]
    fn warm_model_load_and_run_on_slice() {
        let models_dir = std::path::PathBuf::from(concat!(
            env!("CARGO_MANIFEST_DIR"),
            "/../../tests/models/net"
        ));
        let model_path = models_dir.join("model.onnx");
        assert!(
            model_path.exists(),
            "fixture missing: {}",
            model_path.display()
        );
        let tmp = tempfile::tempdir().unwrap();
        let meta = crate::slicer::slice_model(&model_path, Some(tmp.path()), None, TEST_OPS, None)
            .expect("slice_model failed");
        crate::slicer::materializer::ensure_all_slices_materialized(tmp.path(), &meta)
            .expect("materialization failed");
        assert!(!meta.slices.is_empty(), "model produced zero slices");
        let first_slice = &meta.slices[0];
        let onnx_path = tmp
            .path()
            .join(format!("slice_0/payload/{}", first_slice.filename));
        assert!(
            onnx_path.exists(),
            "sliced ONNX missing: {}",
            onnx_path.display()
        );
        let input_shape = &first_slice.shape.tensor_shape.input;
        assert!(
            !input_shape.is_empty() && !input_shape[0].is_empty(),
            "empty input shape"
        );
        let shape: Vec<usize> = input_shape[0].iter().map(|&d| d.max(1) as usize).collect();
        let elem_count: usize = shape.iter().product();

        let warm = WarmModel::load(&onnx_path, &shape).expect("WarmModel::load failed");
        let input = vec![0.0f64; elem_count];
        let (data1, shape1) = warm.run(&input).unwrap();
        let (data2, shape2) = warm.run(&input).unwrap();
        assert!(!data1.is_empty());
        assert_eq!(shape1, shape2);
        assert_eq!(data1, data2);
    }

    #[test]
    fn zip_named_outputs_empty() {
        let result = zip_named_outputs(&[], &[]).unwrap();
        assert!(result.is_empty());
    }

    #[test]
    fn extract_first_output_empty() {
        let result = extract_first_output(&[]);
        assert!(result.is_err());
    }

    #[test]
    fn build_input_tvalue_respects_declared_dtypes() {
        let shape = [2usize, 3];
        let values: Vec<f64> = (0..6).map(|v| v as f64).collect();

        let tv_f32 = build_input_tvalue(&values, &shape, f32::datum_type()).unwrap();
        assert_eq!(tv_f32.datum_type(), f32::datum_type());
        assert_eq!(tv_f32.shape(), &shape);

        let tv_u8 = build_input_tvalue(&values, &shape, u8::datum_type()).unwrap();
        assert_eq!(tv_u8.datum_type(), u8::datum_type());

        let tv_i64 = build_input_tvalue(&values, &shape, i64::datum_type()).unwrap();
        assert_eq!(tv_i64.datum_type(), i64::datum_type());

        let bool_vals = vec![0.0, 1.0, 0.0, 1.0, 0.0, 1.0];
        let tv_bool = build_input_tvalue(&bool_vals, &shape, bool::datum_type()).unwrap();
        assert_eq!(tv_bool.datum_type(), bool::datum_type());
        let view = tv_bool.to_plain_array_view::<bool>().unwrap();
        assert_eq!(
            view.iter().copied().collect::<Vec<_>>(),
            vec![false, true, false, true, false, true]
        );

        let unsupported = build_input_tvalue(&values, &shape, DatumType::String);
        assert!(unsupported.is_err());
    }
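
    // Added sketch: the dtype tests above pass matching shapes; this checks
    // the other failure mode, a data/shape element-count mismatch, which
    // `build_input_tvalue` surfaces via tensor construction rather than
    // silently truncating or padding.
    #[test]
    fn build_input_tvalue_rejects_shape_element_mismatch() {
        // Three values cannot fill a 2x2 tensor.
        let result = build_input_tvalue(&[0.0, 1.0, 2.0], &[2, 2], f32::datum_type());
        assert!(result.is_err());
    }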

    #[test]
    fn build_input_tvalue_rejects_non_finite() {
        let shape = [3usize];
        for dt in [
            f32::datum_type(),
            f64::datum_type(),
            u8::datum_type(),
            i64::datum_type(),
            bool::datum_type(),
        ] {
            for bad in [f64::NAN, f64::INFINITY, f64::NEG_INFINITY] {
                let err = build_input_tvalue(&[0.0, bad, 1.0], &shape, dt).unwrap_err();
                let msg = format!("{err:?}");
                assert!(
                    msg.contains("non-finite"),
                    "expected non-finite error for dt={dt:?} val={bad}, got {msg}"
                );
            }
        }
    }

    #[test]
    fn build_input_tvalue_rejects_fractional_for_integer_dtypes() {
        let shape = [2usize];
        for dt in [
            u8::datum_type(),
            i8::datum_type(),
            u32::datum_type(),
            i32::datum_type(),
            i64::datum_type(),
            u64::datum_type(),
        ] {
            let err = build_input_tvalue(&[0.0, 1.5], &shape, dt).unwrap_err();
            let msg = format!("{err:?}");
            assert!(
                msg.contains("fractional"),
                "expected fractional error for dt={dt:?}, got {msg}"
            );
        }
    }

    #[test]
    fn build_input_tvalue_rejects_out_of_range_for_integer_dtypes() {
        let shape = [2usize];
        let cases: &[(DatumType, f64)] = &[
            (u8::datum_type(), 256.0),
            (u8::datum_type(), -1.0),
            (i8::datum_type(), 128.0),
            (i8::datum_type(), -129.0),
            (u16::datum_type(), -1.0),
            (i16::datum_type(), 32_768.0),
            (u32::datum_type(), -1.0),
        ];
        for (dt, bad) in cases.iter().copied() {
            let err = build_input_tvalue(&[0.0, bad], &shape, dt).unwrap_err();
            let msg = format!("{err:?}");
            assert!(
                msg.contains("outside"),
                "expected range error for dt={dt:?} val={bad}, got {msg}"
            );
        }
    }

    #[test]
    fn safe_integer_bound_is_inclusive_on_both_sides() {
        let shape = [3usize];
        let bound = I64_SAFE_BOUND as f64;
        build_input_tvalue(&[0.0, bound, -bound], &shape, i64::datum_type())
            .expect("i64 accepts +/- I64_SAFE_BOUND");
        build_input_tvalue(&[0.0, bound, 1.0], &shape, u64::datum_type())
            .expect("u64 accepts I64_SAFE_BOUND");

        i64_to_f64_checked(I64_SAFE_BOUND, "i64")
            .expect("i64_to_f64_checked accepts I64_SAFE_BOUND");
        i64_to_f64_checked(-I64_SAFE_BOUND, "i64")
            .expect("i64_to_f64_checked accepts -I64_SAFE_BOUND");
        u64_to_f64_checked(I64_SAFE_BOUND as u64, "u64")
            .expect("u64_to_f64_checked accepts I64_SAFE_BOUND");

        assert!(i64_to_f64_checked(I64_SAFE_BOUND + 1, "i64").is_err());
        assert!(u64_to_f64_checked(I64_SAFE_BOUND as u64 + 1, "u64").is_err());
    }
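
    // Added sketch: a std-only check documenting the rationale for
    // I64_SAFE_BOUND (2^53), the point past which f64 can no longer
    // represent every integer exactly.
    #[test]
    fn f64_loses_integer_precision_above_safe_bound() {
        // Round-tripping through f64 is exact at the bound itself.
        assert_eq!(I64_SAFE_BOUND as f64 as i64, I64_SAFE_BOUND);
        // 2^53 + 1 has no f64 representation and collapses onto 2^53.
        assert_eq!((I64_SAFE_BOUND + 1) as f64, I64_SAFE_BOUND as f64);
    }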

    #[test]
    fn build_input_tvalue_rejects_i64_above_safe_integer_bound() {
        let shape = [2usize];
        let unsafe_hi = (I64_SAFE_BOUND as f64) + 1024.0;
        let err = build_input_tvalue(&[0.0, unsafe_hi], &shape, i64::datum_type()).unwrap_err();
        let msg = format!("{err:?}");
        assert!(
            msg.contains("safe integer bound"),
            "expected safe-integer-bound error, got {msg}"
        );
    }

    #[test]
    fn build_input_tvalue_rejects_finite_f64_outside_f32_range() {
        let shape = [2usize];
        for bad in [1.0e40_f64, -1.0e40_f64] {
            assert!(bad.is_finite());
            let err = build_input_tvalue(&[0.0, bad], &shape, f32::datum_type()).unwrap_err();
            let msg = format!("{err:?}");
            assert!(
                msg.contains("representable f32 range"),
                "expected f32-range error for val={bad}, got {msg}"
            );
        }
        let ok = build_input_tvalue(
            &[0.0, f32::MAX as f64, -(f32::MAX as f64)],
            &[3],
            f32::datum_type(),
        )
        .unwrap();
        let view = ok.to_plain_array_view::<f32>().unwrap();
        assert!(view.iter().all(|v| v.is_finite()));
    }

    #[test]
    fn build_input_tvalue_rejects_non_boolean_for_bool_dtype() {
        let shape = [2usize];
        let err = build_input_tvalue(&[0.0, 2.0], &shape, bool::datum_type()).unwrap_err();
        let msg = format!("{err:?}");
        assert!(
            msg.contains("bool inputs must be exactly 0 or 1"),
            "expected strict bool error, got {msg}"
        );
    }

    fn write_uint8_cast_to_float_model(path: &Path) {
        use crate::slicer::onnx_proto;
        let input = onnx_proto::make_tensor_value_info("x", 2, &[3]); // 2 = UINT8
        let output = onnx_proto::make_tensor_value_info("y", 1, &[3]); // 1 = FLOAT
        let cast_to = onnx_proto::make_attribute_int("to", 1);
        let node = onnx_proto::make_node(
            "Cast",
            vec!["x".to_string()],
            vec!["y".to_string()],
            vec![cast_to],
        );
        let graph = onnx_proto::make_graph("g", vec![node], vec![input], vec![output], vec![]);
        let model = onnx_proto::make_model(graph, 13);
        onnx_proto::save_model(&model, path).unwrap();
    }

    fn write_uint8_identity_model(path: &Path) {
        use crate::slicer::onnx_proto;
        let input = onnx_proto::make_tensor_value_info("x", 2, &[3]); // UINT8
        let output = onnx_proto::make_tensor_value_info("y", 2, &[3]); // UINT8
        let node = onnx_proto::make_node(
            "Identity",
            vec!["x".to_string()],
            vec!["y".to_string()],
            vec![],
        );
        let graph = onnx_proto::make_graph("g", vec![node], vec![input], vec![output], vec![]);
        let model = onnx_proto::make_model(graph, 13);
        onnx_proto::save_model(&model, path).unwrap();
    }

    #[test]
    fn warm_model_decodes_uint8_output() {
        let tmp = tempfile::tempdir().unwrap();
        let onnx_path = tmp.path().join("u8_identity.onnx");
        write_uint8_identity_model(&onnx_path);

        let shape = [3usize];
        let warm = WarmModel::load(&onnx_path, &shape).expect("WarmModel::load");
        assert_eq!(warm.input_dt, u8::datum_type());
        let (data, out_shape) = warm.run(&[0.0, 128.0, 255.0]).unwrap();
        assert_eq!(out_shape, shape.to_vec());
        assert_eq!(data, vec![0.0, 128.0, 255.0]);
    }

    #[test]
    fn tvalue_to_f64_covers_added_integer_dtypes() {
        fn tv_of<T: Datum>(values: &[T]) -> TValue {
            let arr =
                tract_ndarray::ArrayD::from_shape_vec(IxDyn(&[values.len()]), values.to_vec())
                    .unwrap();
            arr.into_tvalue()
        }
        let (d, s) = tvalue_to_f64(&tv_of::<u8>(&[0, 255]), "u8").unwrap();
        assert_eq!((d, s), (vec![0.0, 255.0], vec![2]));
        let (d, _) = tvalue_to_f64(&tv_of::<i8>(&[-128, 127]), "i8").unwrap();
        assert_eq!(d, vec![-128.0, 127.0]);
        let (d, _) = tvalue_to_f64(&tv_of::<u16>(&[0, 65_535]), "u16").unwrap();
        assert_eq!(d, vec![0.0, 65_535.0]);
        let (d, _) = tvalue_to_f64(&tv_of::<i16>(&[-32_768, 32_767]), "i16").unwrap();
        assert_eq!(d, vec![-32_768.0, 32_767.0]);
        let (d, _) = tvalue_to_f64(&tv_of::<u32>(&[0, u32::MAX]), "u32").unwrap();
        assert_eq!(d, vec![0.0, u32::MAX as f64]);
        let (d, _) = tvalue_to_f64(&tv_of::<u64>(&[0, 1_000_000]), "u64").unwrap();
        assert_eq!(d, vec![0.0, 1_000_000.0]);

        let unsafe_hi = (I64_SAFE_BOUND as u64) + 7;
        let err = tvalue_to_f64(&tv_of::<u64>(&[unsafe_hi]), "u64").unwrap_err();
        assert!(
            format!("{err:?}").contains("safe integer bound"),
            "expected u64 safe-bound error"
        );
    }

    #[test]
    fn warm_model_runs_non_f32_input_through_planner() {
        let tmp = tempfile::tempdir().unwrap();
        let onnx_path = tmp.path().join("u8_cast.onnx");
        write_uint8_cast_to_float_model(&onnx_path);

        let shape = [3usize];
        let warm = WarmModel::load(&onnx_path, &shape).expect("WarmModel::load");
        assert_eq!(warm.input_dt, u8::datum_type());
        let (data, out_shape) = warm.run(&[0.0, 42.0, 255.0]).unwrap();
        assert_eq!(out_shape, shape.to_vec());
        assert_eq!(data, vec![0.0, 42.0, 255.0]);

        // A second call with a value that can't round-trip through u8 must error
        // from build_input_tvalue before the planner is invoked.
        let err = warm.run(&[0.0, 256.0, 0.0]).unwrap_err();
        assert!(format!("{err:?}").contains("outside"));
    }

    #[test]
    fn run_inference_multi_honors_per_input_dtype() {
        let tmp = tempfile::tempdir().unwrap();
        let onnx_path = tmp.path().join("u8_cast.onnx");
        write_uint8_cast_to_float_model(&onnx_path);

        let inputs: Vec<(&str, Vec<f64>, Vec<usize>)> = vec![("x", vec![1.0, 2.0, 3.0], vec![3])];
        let out = run_inference_multi_named(&onnx_path, &inputs).unwrap();
        let (data, shape) = out.values().next().expect("at least one output");
        assert_eq!(shape, &vec![3]);
        assert_eq!(data, &vec![1.0, 2.0, 3.0]);
    }

    #[test]
    fn resolve_input_datum_type_reads_concrete_model_dtype() {
        let tmp = tempfile::tempdir().unwrap();
        let onnx_path = tmp.path().join("u8_cast.onnx");
        write_uint8_cast_to_float_model(&onnx_path);
        let model = load_onnx_model(&onnx_path).unwrap();
        let dt = resolve_input_datum_type(&model, 0).unwrap();
        assert_eq!(dt, u8::datum_type());
    }
}


================================================
FILE: crates/dsperse/src/backend/traits.rs
================================================
use std::path::Path;

use crate::error::Result;

pub trait ProofBackend: Send + Sync {
    fn prove(&self, circuit_path: &Path, witness_bytes: &[u8]) -> Result<Vec<u8>>;

    fn verify(&self, circuit_path: &Path, witness_bytes: &[u8], proof_bytes: &[u8])
    -> Result<bool>;

    fn witness_f64(
        &self,
        circuit_path: &Path,
        activations: &[f64],
        initializers: &[(Vec<f64>, Vec<usize>)],
    ) -> Result<Vec<u8>>;
}


================================================
FILE: crates/dsperse/src/cli/mod.rs
================================================
use std::num::NonZeroUsize;
use std::path::{Path, PathBuf};

use clap::{Args, Parser, Subcommand};

use crate::backend::jstprove::{JstproveBackend, ProofConfig};
use crate::error::{DsperseError, Result};
use crate::pipeline::{self, RunConfig};

use jstprove_circuits::api::{ProofConfigError, ProofSystemType as ProofSystem};

fn parse_proof_config(value: &str) -> Result<ProofConfig> {
    value.parse().map_err(|e: ProofConfigError| {
        DsperseError::Other(format!("invalid --curve '{value}': {e}"))
    })
}

pub const VERSION: &str = env!("DSPERSE_DISPLAY_VERSION");

#[derive(Parser)]
#[command(name = "dsperse", about = "Distributed zkML Toolkit", version = VERSION)]
pub struct Cli {
    #[command(subcommand)]
    pub command: Commands,
    #[arg(long, default_value = "warn", global = true)]
    pub log_level: String,
}

#[derive(Subcommand)]
pub enum Commands {
    Slice(SliceArgs),
    Combine(CombineArgs),
    Compile(CompileArgs),
    Run(RunArgs),
    Prove(ProveArgs),
    Verify(VerifyArgs),
    Package(PackageArgs),
    Publish(PublishArgs),
    #[command(name = "full-run")]
    FullRun(FullRunArgs),
    Analyze(AnalyzeArgs),
    #[command(name = "setup-holographic")]
    SetupHolographic(SetupHolographicArgs),
}

pub fn dispatch(command: Commands) -> Result<()> {
    match command {
        Commands::Slice(args) => cmd_slice(args),
        Commands::Combine(args) => cmd_combine(args),
        Commands::Compile(args) => cmd_compile(args),
        Commands::Run(args) => cmd_run(args),
        Commands::Prove(args) => cmd_prove(args),
        Commands::Verify(args) => cmd_verify(args),
        Commands::Package(args) => cmd_package(args),
        Commands::Publish(args) => cmd_publish(args),
        Commands::FullRun(args) => cmd_full_run(args),
        Commands::Analyze(args) => cmd_analyze(args),
        Commands::SetupHolographic(args) => cmd_setup_holographic(args),
    }
}

#[derive(Args)]
pub struct SliceArgs {
    #[arg(long)]
    pub model_dir: PathBuf,
    #[arg(long)]
    pub output_dir: Option<PathBuf>,
    #[arg(long, default_value = "512")]
    pub tile_size: Option<usize>,
    #[arg(
        long,
        default_value = "expander",
        help = "Proof system backend (expander or remainder)"
    )]
    pub proof_system: String,
    #[arg(
        long,
        help = "Comma-separated ONNX op names to compile via the proof backend (default: all supported)"
    )]
    pub circuit_ops: Option<String>,
    #[arg(
        long,
        value_delimiter = ',',
        help = "Concrete input shape as comma-separated dims (e.g. 1,3,560,560). Overrides dynamic dimensions."
    )]
    pub input_shape: Option<Vec<i64>>,
}

#[derive(Args)]
pub struct CombineArgs {
    #[arg(long)]
    pub model_dir: PathBuf,
    #[arg(long)]
    pub slices_dir: Option<PathBuf>,
}

#[derive(Args)]
pub struct CompileArgs {
    #[arg(long)]
    pub model_dir: PathBuf,
    #[arg(long)]
    pub slices_dir: Option<PathBuf>,
    #[arg(long)]
    pub layers: Option<String>,
    #[arg(long, default_value = "1")]
    pub parallel: NonZeroUsize,
    #[arg(
        long,
        default_value_t = true,
        action = clap::ArgAction::Set,
        help = "Compile circuits with weights as inputs for shared circuit reuse (default: true)"
    )]
    pub weights_as_inputs: bool,
    #[arg(
        long,
        default_value = "expander",
        help = "Proof system backend (expander or remainder)"
    )]
    pub proof_system: String,
    #[arg(
        long,
        help = "Comma-separated ONNX op names to compile via the proof backend (default: all supported)"
    )]
    pub circuit_ops: Option<String>,
    #[arg(
        long = "proof-config",
        visible_alias = "curve",
        default_value = "bn254_raw",
        help = "Proof config: bn254_raw, goldilocks_basefold, goldilocks_ext2_basefold, goldilocks_ext3_whir, goldilocks_ext4_whir. The --curve alias is retained for backward compatibility and will be removed in a future release."
    )]
    pub curve: String,
    #[arg(
        long,
        help = "Skip compilation of slices whose estimated constraint count exceeds this threshold"
    )]
    pub skip_compile_over_size: Option<u64>,
    #[arg(
        long,
        default_value_t = false,
        action = clap::ArgAction::Set,
        help = "Allow the command to exit 0 when individual slices fail to compile.  Failed slices fall back to ONNX execution at run / prove time, producing a partial-coverage proof.  Off by default so CI surfaces real compile regressions."
    )]
    pub allow_onnx_fallback: bool,
    #[arg(
        long,
        default_value_t = false,
        action = clap::ArgAction::Set,
        help = "After compiling each slice, run holographic GKR setup and persist the verifying key as vk.bin in the bundle directory. Requires --proof-config goldilocks_ext4_whir."
    )]
    pub holographic: bool,
}

#[derive(Args)]
pub struct RunArgs {
    #[arg(long)]
    pub model_dir: PathBuf,
    #[arg(long)]
    pub input_file: PathBuf,
    #[arg(long)]
    pub run_dir: Option<PathBuf>,
    #[arg(long)]
    pub slices_dir: Option<PathBuf>,
    #[arg(long, default_value = "1")]
    pub parallel: NonZeroUsize,
    #[arg(long)]
    pub batch: bool,
    #[arg(
        long,
        help = "Path to consumer ONNX with fine-tuned weights to inject at inference time"
    )]
    pub weights: Option<PathBuf>,
    #[arg(
        long,
        default_value_t = true,
        action = clap::ArgAction::Set,
        help = "Run inference on combined monolithic ONNX instead of per-slice execution"
    )]
    pub combined: bool,
}

#[derive(Args)]
pub struct ProveArgs {
    #[arg(long)]
    pub run_dir: PathBuf,
    #[arg(long)]
    pub model_dir: PathBuf,
    #[arg(long)]
    pub slices_dir: Option<PathBuf>,
    #[arg(long, default_value = "1")]
    pub parallel: NonZeroUsize,
}

#[derive(Args)]
pub struct VerifyArgs {
    #[arg(long)]
    pub run_dir: PathBuf,
    #[arg(long)]
    pub model_dir: PathBuf,
    #[arg(long)]
    pub slices_dir: Option<PathBuf>,
    #[arg(long, default_value = "1")]
    pub parallel: NonZeroUsize,
}

#[derive(Args)]
pub struct PackageArgs {
    #[arg(long)]
    pub model_dir: PathBuf,
    #[arg(long)]
    pub slices_dir: Option<PathBuf>,
    #[arg(long)]
    pub output_dir: Option<PathBuf>,
    #[arg(long)]
    pub author: Option<String>,
    #[arg(long)]
    pub model_version: Option<String>,
    #[arg(long)]
    pub model_name: Option<String>,
    #[arg(long)]
    pub timeout: Option<u64>,
    #[arg(
        long,
        help = "Finite field curve used as domain separator in content hashes (bn254, goldilocks, goldilocks_basefold, goldilocks_ext2, goldilocks_whir, goldilocks_whir_pq)"
    )]
    pub curve: Option<String>,
}

#[derive(Args)]
pub struct PublishArgs {
    #[arg(long, help = "Package directory containing manifest.msgpack")]
    pub dir: PathBuf,
    #[arg(long, help = "Registry base URL")]
    pub url: String,
    #[arg(long, env = "REGISTRY_AUTH_TOKEN", hide_env_values = true)]
    pub auth_token: String,
    #[arg(long)]
    pub name: String,
    #[arg(long, default_value = "")]
    pub description: String,
    #[arg(long)]
    pub author: String,
    #[arg(long, default_value = "1.0.0")]
    pub version: String,
    #[arg(long, default_value = "JSTPROVE")]
    pub proof_system: String,
    #[arg(long, default_value = "3600")]
    pub timeout: u64,
    #[arg(long, default_value_t = false, help = "Activate model after upload")]
    pub activate: bool,
}

#[derive(Args)]
pub struct FullRunArgs {
    #[arg(long)]
    pub model_dir: PathBuf,
    #[arg(long)]
    pub input_file: Option<PathBuf>,
    #[arg(long)]
    pub slices_dir: Option<PathBuf>,
    #[arg(long)]
    pub layers: Option<String>,
    #[arg(
        long,
        default_value_t = true,
        action = clap::ArgAction::Set,
        help = "Compile circuits with weights as inputs for shared circuit reuse (default: true)"
    )]
    pub weights_as_inputs: bool,
    #[arg(long, default_value = "1")]
    pub parallel: NonZeroUsize,
    #[arg(long)]
    pub batch: bool,
    #[arg(
        long,
        help = "Path to consumer ONNX with fine-tuned weights to inject at inference time"
    )]
    pub weights: Option<PathBuf>,
    #[arg(
        long,
        default_value = "expander",
        help = "Proof system backend (expander or remainder)"
    )]
    pub proof_system: String,
    #[arg(
        long,
        help = "Comma-separated ONNX op names to compile via the proof backend (default: all supported)"
    )]
    pub circuit_ops: Option<String>,
    #[arg(
        long,
        default_value_t = true,
        action = clap::ArgAction::Set,
        help = "Run inference on combined monolithic ONNX instead of per-slice execution"
    )]
    pub combined: bool,
    #[arg(
        long = "proof-config",
        visible_alias = "curve",
        default_value = "bn254_raw",
        help = "Proof config: bn254_raw, goldilocks_basefold, goldilocks_ext2_basefold, goldilocks_ext3_whir, goldilocks_ext4_whir. The --curve alias is retained for backward compatibility and will be removed in a future release."
    )]
    pub curve: String,
    #[arg(
        long,
        help = "Skip compilation of slices whose estimated constraint count exceeds this threshold"
    )]
    pub skip_compile_over_size: Option<u64>,
    #[arg(
        long,
        default_value_t = false,
        action = clap::ArgAction::Set,
        help = "Allow full-run to proceed when individual slices fail to compile.  Failed slices fall back to ONNX execution, producing a partial-coverage proof.  Off by default so CI surfaces real compile regressions."
    )]
    pub allow_onnx_fallback: bool,
    #[arg(
        long,
        default_value_t = false,
        action = clap::ArgAction::Set,
        help = "After compiling each slice, run holographic GKR setup and persist the verifying key as vk.bin in the bundle directory. Requires --proof-config goldilocks_ext4_whir."
    )]
    pub holographic: bool,
}

#[derive(Args)]
pub struct SetupHolographicArgs {
    #[arg(long)]
    pub model_dir: PathBuf,
    #[arg(long)]
    pub slices_dir: Option<PathBuf>,
    #[arg(long, default_value = "1")]
    pub parallel: NonZeroUsize,
    #[arg(
        long,
        default_value_t = false,
        action = clap::ArgAction::Set,
        help = "Re-run setup and overwrite vk.bin even when the bundle already has one"
    )]
    pub overwrite: bool,
}

struct CircuitOps(Vec<String>);

impl CircuitOps {
    fn as_refs(&self) -> Vec<&str> {
        self.0.iter().map(String::as_str).collect()
    }
}

fn resolve_circuit_ops(proof_system_str: &str, circuit_ops: Option<&str>) -> Result<CircuitOps> {
    let ps: ProofSystem =
        proof_system_str
            .parse()
            .map_err(|e: jstprove_circuits::api::ProofSystemParseError| {
                DsperseError::Other(e.to_string())
            })?;

    let supported = ps.supported_ops();

    let ops = match circuit_ops {
        None => supported.iter().map(|s| (*s).to_string()).collect(),
        Some(spec) => {
            let requested: Vec<String> = spec
                .split(',')
                .map(|s| s.trim().to_string())
                .filter(|s| !s.is_empty())
                .collect();
            if requested.is_empty() {
                return Err(DsperseError::Other(
                    "empty --circuit-ops; provide at least one op or omit the flag to use all supported ops".into(),
                ));
            }
            for op in &requested {
                if !supported.contains(&op.as_str()) {
                    return Err(DsperseError::Other(format!(
                        "op {op:?} is not supported by proof system {ps}. Supported: {supported:?}"
                    )));
                }
            }
            requested
        }
    };
    Ok(CircuitOps(ops))
}

fn resolve_slices_dir(slices_dir: Option<PathBuf>, model_dir: &Path) -> PathBuf {
    slices_dir.unwrap_or_else(|| model_dir.join("slices"))
}

pub fn cmd_slice(args: SliceArgs) -> Result<()> {
    let model_path = args.model_dir.join("model.onnx");
    if !model_path.exists() {
        return Err(DsperseError::Slicer(format!(
            "model.onnx not found in {}",
            args.model_dir.display()
        )));
    }
    let ops = resolve_circuit_ops(&args.proof_system, args.circuit_ops.as_deref())?;
    let metadata = crate::slicer::slice_model(
        &model_path,
        args.output_dir.as_deref(),
        args.tile_size,
        &ops.as_refs(),
        args.input_shape.as_deref(),
    )?;
    tracing::info!(slices = metadata.slices.len(), "slicing complete");
    Ok(())
}

pub fn cmd_combine(args: CombineArgs) -> Result<()> {
    let slices_dir = resolve_slices_dir(args.slices_dir, &args.model_dir);
    let meta = pipeline::runner::load_model_metadata(&slices_dir)?;
    let path = crate::slicer::combiner::materialize_combined_to_disk(&slices_dir, &meta)?;
    tracing::info!(path = %path.display(), "combined ONNX materialized");
    Ok(())
}

pub fn cmd_compile(args: CompileArgs) -> Result<()> {
    let proof_config = parse_proof_config(&args.curve)?;
    let backend = JstproveBackend::new();
    let slices_dir = resolve_slices_dir(args.slices_dir, &args.model_dir);

    let layers = args
        .layers
        .as_ref()
        .map(|s| parse_index_spec(s))
        .transpose()?;

    let ops = resolve_circuit_ops(&args.proof_system, args.circuit_ops.as_deref())?;

    let report = pipeline::compile_slices(
        &slices_dir,
        &backend,
        proof_config,
        args.parallel.get(),
        args.weights_as_inputs,
        layers.as_deref(),
        &ops.as_refs(),
        args.skip_compile_over_size,
        args.holographic,
    )?;
    if args.allow_onnx_fallback {
        Ok(())
    } else {
        report.ok_if_no_failures().map(|_| ())
    }
}

pub fn cmd_run(args: RunArgs) -> Result<()> {
    if !args.input_file.is_file() {
        return Err(DsperseError::Other(format!(
            "input file not found: {}",
            args.input_file.display()
        )));
    }

    let backend = JstproveBackend::new();
    let slices_dir = resolve_slices_dir(args.slices_dir, &args.model_dir);

    let run_dir = args
        .run_dir
        .unwrap_or_else(|| args.model_dir.join("run").join(format!("run_{}", run_id())));

    let config = RunConfig {
        parallel: args.parallel.get(),
        batch: args.batch,
        weights_onnx: args.weights,
        combined: args.combined,
    };

    pipeline::run_inference(&slices_dir, &args.input_file, &run_dir, &backend, &config)?;
    Ok(())
}

pub fn cmd_prove(args: ProveArgs) -> Result<()> {
    let backend = JstproveBackend::new();
    let slices_dir = resolve_slices_dir(args.slices_dir, &args.model_dir);

    pipeline::prove_run(&args.run_dir, &slices_dir, &backend, args.parallel.get())?;
    Ok(())
}

pub fn cmd_verify(args: VerifyArgs) -> Result<()> {
    let backend = JstproveBackend::new();
    let slices_dir = resolve_slices_dir(args.slices_dir, &args.model_dir);

    pipeline::verify_run(&args.run_dir, &slices_dir, &backend, args.parallel.get())?;
    Ok(())
}

pub fn cmd_package(args: PackageArgs) -> Result<()> {
    let slices_dir = resolve_slices_dir(args.slices_dir, &args.model_dir);
    let output_dir = args
        .output_dir
        .unwrap_or_else(|| args.model_dir.join("package"));

    let config = pipeline::packager::PackageConfig {
        output_dir,
        author: args.author,
        model_version: args.model_version,
        model_name: args.model_name,
        timeout: args.timeout,
        curve: args.curve,
    };

    let result = pipeline::packager::package_content_addressed(&slices_dir, &config)?;

    tracing::info!(
        components = result.component_count,
        weight_biases = result.wb_count,
        total_bytes = result.total_size,
        manifest = %result.manifest_path.display(),
        "content-addressed packaging complete"
    );

    Ok(())
}

pub fn cmd_publish(args: PublishArgs) -> Result<()> {
    let config = pipeline::publisher::PublishConfig {
        api_url: args.url,
        auth_token: args.auth_token,
        name: args.name,
        description: args.description,
        author: args.author,
        version: args.version,
        proof_system: args.proof_system,
        timeout: args.timeout,
        activate: args.activate,
    };

    let result = match pipeline::publisher::publish(&args.dir, &config) {
        Ok(r) => r,
        Err(e) => {
            tracing::error!(error = %e, "publish failed");
            return Err(e);
        }
    };

    tracing::info!(
        model_id = %result.model_id,
        components_uploaded = result.components_uploaded,
        components_skipped = result.components_skipped,
        weights_uploaded = result.weights_uploaded,
        weights_skipped = result.weights_skipped,
        "publish complete"
    );

    Ok(())
}

pub fn cmd_full_run(args: FullRunArgs) -> Result<()> {
    let proof_config = parse_proof_config(&args.curve)?;
    let backend = JstproveBackend::new();

    let slices_dir = resolve_slices_dir(args.slices_dir, &args.model_dir);

    let input_file = args
        .input_file
        .unwrap_or_else(|| args.model_dir.join(crate::utils::paths::INPUT_FILE));

    if !input_file.is_file() {
        return Err(DsperseError::Other(format!(
            "input file not found: {}",
            input_file.display()
        )));
    }

    if args.weights.is_some() && !args.weights_as_inputs {
        return Err(DsperseError::Other(
            "--weights requires --weights-as-inputs during compilation".into(),
        ));
    }

    let layers = args
        .layers
        .as_ref()
        .map(|s| parse_index_spec(s))
        .transpose()?;

    let ops = resolve_circuit_ops(&args.proof_system, args.circuit_ops.as_deref())?;

    tracing::info!("compiling slices");
    let report = pipeline::compile_slices(
        &slices_dir,
        &backend,
        proof_config,
        args.parallel.get(),
        args.weights_as_inputs,
        layers.as_deref(),
        &ops.as_refs(),
        args.skip_compile_over_size,
        args.holographic,
    )?;
    if !args.allow_onnx_fallback {
        report.ok_if_no_failures()?;
    }

    let run_dir = args.model_dir.join("run").join(format!("run_{}", run_id()));

    let config = RunConfig {
        parallel: args.parallel.get(),
        batch: args.batch,
        weights_onnx: args.weights,
        combined: args.combined,
    };

    tracing::info!("running inference");
    pipeline::run_inference(&slices_dir, &input_file, &run_dir, &backend, &config)?;

    tracing::info!("proving");
    pipeline::prove_run(&run_dir, &slices_dir, &backend, args.parallel.get())?;

    tracing::info!("verifying");
    pipeline::verify_run(&run_dir, &slices_dir, &backend, args.parallel.get())?;

    tracing::info!(run_dir = %run_dir.display(), "full run complete");
    Ok(())
}

pub fn cmd_setup_holographic(args: SetupHolographicArgs) -> Result<()> {
    let backend = JstproveBackend::new();
    let slices_dir = resolve_slices_dir(args.slices_dir, &args.model_dir);

    let report = pipeline::setup_holographic_for_slices(
        &slices_dir,
        &backend,
        args.parallel.get(),
        args.overwrite,
    )?;

    tracing::info!(
        processed = report.processed,
        skipped = report.skipped_already_present,
        failed = report.failed.len(),
        "holographic setup complete"
    );

    report.ok_if_no_failures().map(|_| ())
}

#[derive(Args)]
pub struct AnalyzeArgs {
    #[arg(long)]
    pub model_dir: PathBuf,
    #[arg(long)]
    pub slices_dir: Option<PathBuf>,
    #[arg(
        long,
        default_value = "expander",
        help = "Proof system backend (expander or remainder)"
    )]
    pub proof_system: String,
    #[arg(
        long,
        help = "Comma-separated ONNX op names to compile via the proof backend"
    )]
    pub circuit_ops: Option<String>,
    #[arg(
        long,
        help = "Skip slices whose estimated constraint count exceeds this"
    )]
    pub skip_compile_over_size: Option<u64>,
    #[arg(
        long = "proof-config",
        visible_alias = "curve",
        default_value = "bn254_raw",
        help = "Proof config for circuit signature computation"
    )]
    pub proof_config: String,
    #[arg(
        long,
        default_value_t = AnalyzeFormat::Table,
        value_enum,
        help = "Output format"
    )]
    pub format: AnalyzeFormat,
}

#[derive(Clone, Copy, Debug, PartialEq, Eq, clap::ValueEnum)]
pub enum AnalyzeFormat {
    Table,
    Json,
}

impl std::fmt::Display for AnalyzeFormat {
    fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
        match self {
            Self::Table => f.write_str("table"),
            Self::Json => f.write_str("json"),
        }
    }
}

fn cmd_analyze(args: AnalyzeArgs) -> Result<()> {
    let slices_dir = resolve_slices_dir(args.slices_dir, &args.model_dir);
    let ops = resolve_circuit_ops(&args.proof_system, args.circuit_ops.as_deref())?;
    // Validate proof_config through the same parser cmd_compile and
    // cmd_full_run use, so a typo in --proof-config fails fast with an
    // "unknown proof config 'foo'" message rather than silently
    // producing signatures under an unintended curve.
    let proof_config = parse_proof_config(&args.proof_config)?;
    let proof_config_name = proof_config.to_string();

    let reports = pipeline::analyze_slices(
        &slices_dir,
        &ops.as_refs(),
        args.skip_compile_over_size,
        Some(proof_config_name.as_str()),
    )?;

    if matches!(args.format, AnalyzeFormat::Json) {
        println!(
            "{}",
            serde_json::to_string_pretty(&reports)
                .map_err(|e| DsperseError::Other(e.to_string()))?
        );
    } else {
        let hdr_ops = "OPS";
        println!(
            "{:<8} {:<10} {:<28} {:<14} {:<6} {:<6} {:<6} {:<12} {hdr_ops}",
            "SLICE", "BACKEND", "REASON", "EST.CONSTR", "TILED", "CHSPL", "DMSPL", "SIGNATURE"
        );
        println!("{}", "-".repeat(120));

        let mut jstprove_count = 0usize;
        let mut onnx_count = 0usize;
        let mut missing_count = 0usize;
        let mut total_constraints: u64 = 0;
        let mut unique_sigs: std::collections::HashSet<String> = std::collections::HashSet::new();

        for r in &reports {
            let est = r
                .estimated_constraints
                .map(|c| format!("{c}"))
                .unwrap_or_default();
            let sig = r
                .circuit_signature
                .as_deref()
                .map(|s| &s[..12.min(s.len())])
                .unwrap_or("");
            println!(
                "{:<8} {:<10} {:<28} {:<14} {:<6} {:<6} {:<6} {:<12} {}",
                r.index,
                r.backend,
                r.reason,
                est,
                r.tiled,
                r.channel_split,
                r.dim_split,
                sig,
                r.ops,
            );
            match r.backend.as_str() {
                "jstprove" => jstprove_count += 1,
                "onnx" => onnx_count += 1,
                "missing" => missing_count += 1,
                other => {
                    tracing::warn!(
                        slice = r.index,
                        backend = other,
                        "analyze: unknown backend classification; not counted"
                    );
                }
            }
            if let Some(c) = r.estimated_constraints {
                total_constraints += c;
            }
            if let Some(ref s) = r.circuit_signature {
                unique_sigs.insert(s.clone());
            }
        }

        println!("{}", "-".repeat(120));
        println!(
            "total: {} slices | jstprove: {} | onnx: {} | missing: {} | unique circuits: {} | total constraints: {}",
            reports.len(),
            jstprove_count,
            onnx_count,
            missing_count,
            unique_sigs.len(),
            total_constraints,
        );
    }

    Ok(())
}

fn parse_index_spec(spec: &str) -> Result<Vec<usize>> {
    let mut layers = Vec::new();
    for part in spec.split(',') {
        let part = part.trim();
        if part.is_empty() {
            continue;
        }
        if let Some((start, end)) = part.split_once('-') {
            let s: usize = start.trim().parse().map_err(|_| {
                DsperseError::Other(format!("invalid index spec range start: {start:?}"))
            })?;
            let e: usize = end.trim().parse().map_err(|_| {
                DsperseError::Other(format!("invalid index spec range end: {end:?}"))
            })?;
            if s > e {
                return Err(DsperseError::Other(format!(
                    "invalid index spec range: start {s} > end {e}"
                )));
            }
            layers.extend(s..=e);
        } else {
            let n: usize = part
                .parse()
                .map_err(|_| DsperseError::Other(format!("invalid index spec token: {part:?}")))?;
            layers.push(n);
        }
    }
    if layers.is_empty() {
        return Err(DsperseError::Other("empty index spec".into()));
    }
    Ok(layers)
}

fn run_id() -> String {
    let now = std::time::SystemTime::now()
        .duration_since(std::time::UNIX_EPOCH)
        .unwrap_or_default();
    let uuid = uuid::Uuid::new_v4();
    format!("{}_{}", now.as_secs(), uuid.as_simple())
}

#[cfg(test)]
mod tests {
    use super::*;
    use clap::Parser;

    #[test]
    fn parse_index_spec_single() {
        assert_eq!(parse_index_spec("3").unwrap(), vec![3]);
    }

    #[test]
    fn parse_index_spec_multiple() {
        assert_eq!(parse_index_spec("1,3,5").unwrap(), vec![1, 3, 5]);
    }

    #[test]
    fn parse_index_spec_range() {
        assert_eq!(parse_index_spec("2-5").unwrap(), vec![2, 3, 4, 5]);
    }

    #[test]
    fn parse_index_spec_mixed() {
        assert_eq!(parse_index_spec("0,2-4,7").unwrap(), vec![0, 2, 3, 4, 7]);
    }

    #[test]
    fn parse_index_spec_whitespace_tolerance() {
        assert_eq!(parse_index_spec(" 1 , 2 - 3 ").unwrap(), vec![1, 2, 3]);
    }

    #[test]
    fn parse_index_spec_empty_rejected() {
        assert!(parse_index_spec("").is_err());
    }

    #[test]
    fn parse_index_spec_invalid_token() {
        assert!(parse_index_spec("abc").is_err());
    }

    #[test]
    fn parse_index_spec_reversed_range() {
        assert!(parse_index_spec("5-2").is_err());
    }

    #[test]
    fn parse_index_spec_trailing_comma() {
        assert_eq!(parse_index_spec("1,2,").unwrap(), vec![1, 2]);
    }

    #[test]
    fn run_id_format() {
        let id = run_id();
        let parts: Vec<&str> = id.splitn(2, '_').collect();
        assert_eq!(parts.len(), 2);
        assert!(parts[0].parse::<u64>().is_ok());
        assert_eq!(parts[1].len(), 32);
    }

    #[test]
    fn run_id_unique() {
        let id1 = run_id();
        let id2 = run_id();
        assert_ne!(id1, id2);
    }

    #[test]
    fn cli_parse_slice_command() {
        let cli = Cli::parse_from(["dsperse", "slice", "--model-dir", "/tmp/model"]);
        assert!(matches!(cli.command, Commands::Slice(_)));
    }

    #[test]
    fn cli_parse_run_command() {
        let cli = Cli::parse_from([
            "dsperse",
            "run",
            "--model-dir",
            "/tmp/model",
            "--input-file",
            "/tmp/input.json",
        ]);
        assert!(matches!(cli.command, Commands::Run(_)));
    }

    #[test]
    fn cli_log_level_default() {
        let cli = Cli::parse_from(["dsperse", "slice", "--model-dir", "/tmp"]);
        assert_eq!(cli.log_level, "warn");
    }

    #[test]
    fn cli_log_level_override() {
        let cli = Cli::parse_from([
            "dsperse",
            "--log-level",
            "debug",
            "slice",
            "--model-dir",
            "/tmp",
        ]);
        assert_eq!(cli.log_level, "debug");
    }

    #[test]
    fn cli_compile_with_layers() {
        let cli = Cli::parse_from([
            "dsperse",
            "compile",
            "--model-dir",
            "/tmp",
            "--layers",
            "0,2-4",
        ]);
        if let Commands::Compile(args) = cli.command {
            assert_eq!(args.layers.as_deref(), Some("0,2-4"));
        } else {
            panic!("expected Compile");
        }
    }

    #[test]
    fn cli_run_parallel() {
        let cli = Cli::parse_from([
            "dsperse",
            "run",
            "--model-dir",
            "/tmp",
            "--input-file",
            "/tmp/in.json",
            "--parallel",
            "4",
        ]);
        if let Commands::Run(args) = cli.command {
            assert_eq!(args.parallel.get(), 4);
        } else {
            panic!("expected Run");
        }
    }

    #[test]
    fn cli_slice_with_tile_size() {
        let cli = Cli::parse_from([
            "dsperse",
            "slice",
            "--model-dir",
            "/tmp",
            "--tile-size",
            "1024",
        ]);
        if let Commands::Slice(args) = cli.command {
            assert_eq!(args.tile_size, Some(1024));
        } else {
            panic!("expected Slice");
        }
    }

    #[test]
    fn cli_parse_combine_command() {
        let cli = Cli::parse_from(["dsperse", "combine", "--model-dir", "/tmp/model"]);
        assert!(matches!(cli.command, Commands::Combine(_)));
    }

    #[test]
    fn cli_parse_combine_with_slices_dir() {
        let cli = Cli::parse_from([
            "dsperse",
            "combine",
            "--model-dir",
            "/tmp/model",
            "--slices-dir",
            "/tmp/slices",
        ]);
        if let Commands::Combine(args) = cli.command {
            assert_eq!(
                args.slices_dir,
                Some(std::path::PathBuf::from("/tmp/slices"))
            );
        } else {
            panic!("expected Combine");
        }
    }

    #[test]
    fn cli_run_combined_default_true() {
        let cli = Cli::parse_from([
            "dsperse",
            "run",
            "--model-dir",
            "/tmp",
            "--input-file",
            "/tmp/in.json",
        ]);
        if let Commands::Run(args) = cli.command {
            assert!(args.combined);
        } else {
            panic!("expected Run");
        }
    }

    #[test]
    fn cli_run_combined_explicit_false() {
        let cli = Cli::parse_from([
            "dsperse",
            "run",
            "--model-dir",
            "/tmp",
            "--input-file",
            "/tmp/in.json",
            "--combined",
            "false",
        ]);
        if let Commands::Run(args) = cli.command {
            assert!(!args.combined);
        } else {
            panic!("expected Run");
        }
    }

    #[test]
    fn cli_compile_holographic_default_false() {
        let cli = Cli::parse_from(["dsperse", "compile", "--model-dir", "/tmp"]);
        if let Commands::Compile(args) = cli.command {
            assert!(!args.holographic);
        } else {
            panic!("expected Compile");
        }
    }

    #[test]
    fn cli_compile_holographic_explicit_true() {
        let cli = Cli::parse_from([
            "dsperse",
            "compile",
            "--model-dir",
            "/tmp",
            "--holographic",
            "true",
        ]);
        if let Commands::Compile(args) = cli.command {
            assert!(args.holographic);
        } else {
            panic!("expected Compile");
        }
    }

    #[test]
    fn cli_full_run_holographic_explicit_true() {
        let cli = Cli::parse_from([
            "dsperse",
            "full-run",
            "--model-dir",
            "/tmp",
            "--holographic",
            "true",
        ]);
        if let Commands::FullRun(args) = cli.command {
            assert!(args.holographic);
        } else {
            panic!("expected FullRun");
        }
    }

    #[test]
    fn cli_setup_holographic_command() {
        let cli = Cli::parse_from([
            "dsperse",
            "setup-holographic",
            "--model-dir",
            "/tmp",
            "--parallel",
            "4",
        ]);
        if let Commands::SetupHolographic(args) = cli.command {
            assert_eq!(args.parallel.get(), 4);
            assert!(!args.overwrite);
        } else {
            panic!("expected SetupHolographic");
        }
    }

    #[test]
    fn cli_setup_holographic_overwrite() {
        let cli = Cli::parse_from([
            "dsperse",
            "setup-holographic",
            "--model-dir",
            "/tmp",
            "--overwrite",
            "true",
        ]);
        if let Commands::SetupHolographic(args) = cli.command {
            assert!(args.overwrite);
        } else {
            panic!("expected SetupHolographic");
        }
    }

    #[test]
    fn cli_compile_wai_default_true() {
        let cli = Cli::parse_from(["dsperse", "compile", "--model-dir", "/tmp"]);
        if let Commands::Compile(args) = cli.command {
            assert!(args.weights_as_inputs);
        } else {
            panic!("expected Compile");
        }
    }

    #[test]
    fn cli_compile_wai_explicit_false() {
        let cli = Cli::parse_from([
            "dsperse",
            "compile",
            "--model-dir",
            "/tmp",
            "--weights-as-inputs",
            "false",
        ]);
        if let Commands::Compile(args) = cli.command {
            assert!(!args.weights_as_inputs);
        } else {
            panic!("expected Compile");
        }
    }

    #[test]
    fn resolve_circuit_ops_invalid_proof_system() {
        let result = resolve_circuit_ops("nonexistent", None);
        assert!(result.is_err());
    }

    #[test]
    fn resolve_circuit_ops_unsupported_op() {
        let result = resolve_circuit_ops("expander", Some("FakeOp"));
        assert!(result.is_err());
    }

    #[test]
    fn resolve_circuit_ops_empty_spec_rejected() {
        let result = resolve_circuit_ops("expander", Some(""));
        assert!(result.is_err());
    }

    #[test]
    fn resolve_circuit_ops_whitespace_only_spec_rejected() {
        let result = resolve_circuit_ops("expander", Some(" ,  , "));
        assert!(result.is_err());
    }

    #[test]
    fn resolve_circuit_ops_valid_specific_ops() {
        let supported = ProofSystem::Expander.supported_ops();
        assert!(!supported.is_empty());
        let first_op = supported[0];
        let ops = resolve_circuit_ops("expander", Some(first_op)).unwrap();
        assert_eq!(ops.as_refs(), vec![first_op]);
    }

    #[test]
    fn resolve_circuit_ops_none_returns_all() {
        let ops = resolve_circuit_ops("expander", None).unwrap();
        let expected: Vec<&str> = ProofSystem::Expander.supported_ops().to_vec();
        assert_eq!(ops.as_refs(), expected);
    }

    #[test]
    fn resolve_slices_dir_custom_path() {
        let result = resolve_slices_dir(Some(PathBuf::from("/custom")), Path::new("/model"));
        assert_eq!(result, PathBuf::from("/custom"));
    }

    #[test]
    fn resolve_slices_dir_default_fallback() {
        let model_dir = Path::new("/model");
        let result = resolve_slices_dir(None, model_dir);
        assert_eq!(result, model_dir.join("slices"));
    }
}


================================================
FILE: crates/dsperse/src/converter.rs
================================================
use std::collections::{HashMap, HashSet};
use std::path::Path;

use jstprove_circuits::api::{
    self, ArchitectureType as Architecture, CircuitParamsType as CircuitParams, WANDBType as WANDB,
};

use crate::error::{DsperseError, Result};

pub fn prepare_jstprove_artifacts(
    onnx_path: &Path,
    weights_as_inputs: bool,
) -> Result<(CircuitParams, Architecture, WANDB)> {
    prepare_jstprove_artifacts_filtered(onnx_path, weights_as_inputs, &HashSet::new(), None)
}

pub fn prepare_jstprove_artifacts_filtered(
    onnx_path: &Path,
    weights_as_inputs: bool,
    exclude_from_wai: &HashSet<String>,
    traced_shapes: Option<&HashMap<String, Vec<i64>>>,
) -> Result<(CircuitParams, Architecture, WANDB)> {
    let meta = match traced_shapes {
        Some(shapes) => {
            let converted: HashMap<String, Vec<usize>> = shapes
                .iter()
                .map(|(k, v)| {
                    (
                        k.clone(),
                        v.iter()
                            .map(|&d| if d < 0 { 1 } else { d as usize })
                            .collect(),
                    )
                })
                .collect();
            api::generate_metadata_with_shapes(onnx_path, converted)
        }
        None => api::generate_metadata(onnx_path),
    }
    .map_err(|e| DsperseError::Pipeline(format!("ONNX metadata generation: {e:#}")))?;

    let mut params = meta.circuit_params;
    if weights_as_inputs {
        api::populate_wai_inputs(&mut params, &meta.wandb, exclude_from_wai)
            .map_err(|e| DsperseError::Pipeline(format!("WAI input population: {e}")))?;
    }

    Ok((params, meta.architecture, meta.wandb))
}

#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn prepare_jstprove_artifacts_nonexistent_model() {
        let result = prepare_jstprove_artifacts(Path::new("/nonexistent.onnx"), false);
        assert!(result.is_err());
    }

    #[test]
    fn prepare_jstprove_artifacts_with_weights_as_inputs() {
        let result = prepare_jstprove_artifacts(Path::new("/nonexistent.onnx"), true);
        assert!(result.is_err());
    }
}
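Editor's note, not crate code: `prepare_jstprove_artifacts_filtered` above collapses dynamic ONNX dimensions, which ONNX encodes as negative values, to 1 before handing traced shapes to jstprove. A standalone sketch of that normalization step, under the assumption that a batch-like dynamic dim of 1 is the intended concrete value:

```rust
// Hypothetical standalone sketch (mirrors the closure in
// `prepare_jstprove_artifacts_filtered`): negative ONNX dims mean
// "dynamic", and the converter pins them to 1 for circuit generation.
fn normalise_shape(dims: &[i64]) -> Vec<usize> {
    dims.iter()
        .map(|&d| if d < 0 { 1 } else { d as usize })
        .collect()
}

fn main() {
    // A dynamic batch dimension (-1) becomes a concrete 1.
    assert_eq!(normalise_shape(&[-1, 3, 224, 224]), vec![1, 3, 224, 224]);
    // Fully static shapes pass through unchanged.
    assert_eq!(normalise_shape(&[2, 10]), vec![2, 10]);
    println!("ok");
}
```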


================================================
FILE: crates/dsperse/src/error.rs
================================================
use std::path::PathBuf;

pub type Result<T> = std::result::Result<T, DsperseError>;

#[derive(Debug, thiserror::Error)]
pub enum DsperseError {
    #[error("I/O error at {}: {source}", .path.file_name().and_then(|n| n.to_str()).unwrap_or("<unknown>"))]
    Io {
        source: std::io::Error,
        path: PathBuf,
    },

    #[error("msgpack encode error: {0}")]
    MsgpackEncode(#[from] rmp_serde::encode::Error),

    #[error("msgpack decode error: {0}")]
    MsgpackDecode(#[from] rmp_serde::decode::Error),

    #[error("ONNX error: {0}")]
    Onnx(String),

    #[error("backend error: {0}")]
    Backend(String),

    #[error("slicer error: {0}")]
    Slicer(String),

    #[error("archive error: {0}")]
    Archive(String),

    #[error("metadata error: {0}")]
    Metadata(String),

    #[error("pipeline error: {0}")]
    Pipeline(String),

    #[error("{0}")]
    Other(String),
}

impl DsperseError {
    pub fn io(source: std::io::Error, path: impl Into<PathBuf>) -> Self {
        Self::Io {
            source,
            path: path.into(),
        }
    }
}
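Editor's note, not crate code: the `Io` variant's `#[error(...)]` attribute embeds an expression that reduces the stored `path` to just its final component, falling back to `<unknown>` when no file name exists. A standalone sketch of how that expression resolves, using only `std::path`:

```rust
// Hypothetical standalone sketch of the file-name expression inside the
// `Io` variant's `#[error(...)]` format string. `thiserror` evaluates the
// expression against the `path` field when building the Display output.
use std::path::Path;

fn displayed_name(path: &Path) -> String {
    path.file_name()
        .and_then(|n| n.to_str())
        .unwrap_or("<unknown>")
        .to_owned()
}

fn main() {
    // A normal path shows only its final component in the error message.
    assert_eq!(displayed_name(Path::new("/model/slices/meta.json")), "meta.json");
    // The root path has no file name, so the fallback is used.
    assert_eq!(displayed_name(Path::new("/")), "<unknown>");
    println!("ok");
}
```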


================================================
FILE: crates/dsperse/src/lib.rs
================================================
pub mod backend;
pub mod cli;
pub mod converter;
pub mod error;
pub mod pipeline;
pub mod schema;
pub mod slicer;
pub mod utils;
pub mod version;

#[cfg(feature = "python")]
mod python;


================================================
FILE: crates/dsperse/src/main.rs
================================================
use clap::Parser;
use tracing_subscriber::EnvFilter;

use dsperse::cli;

fn main() {
    let parsed = cli::Cli::parse();

    tracing_subscriber::fmt()
        .with_env_filter(
            EnvFilter::try_from_default_env().unwrap_or_else(|_| EnvFilter::new(&parsed.log_level)),
        )
        .init();

    eprintln!("dsperse {}", cli::VERSION);

    if let Err(e) = cli::dispatch(parsed.command) {
        tracing::error!("{e}");
        std::process::exit(1);
    }
}


================================================
FILE: crates/dsperse/src/pipeline/channel_split.rs
================================================
use std::collections::HashMap;
use std::path::Path;

use ndarray::{Array4, ArrayD, s};

use super::runner::{generate_wai_witness, resolve_circuit_path_optional, run_onnx_inference};
use super::tensor_store::TensorStore;
use crate::backend::jstprove::JstproveBackend;
use crate::error::{DsperseError, Result};
use crate::schema::execution::{ExecutionInfo, ExecutionMethod};
use crate::schema::tiling::{ChannelGroupInfo, ChannelSplitInfo};
use crate::slicer::onnx_proto::TensorProto;
use crate::utils::io::read_msgpack;
use crate::utils::paths::resolve_relative_path;

pub(crate) fn reshape_channel_split_output(
    arr: ArrayD<f64>,
    target_shape: Option<&[i64]>,
) -> Result<ArrayD<f64>> {
    let Some(raw) = target_shape else {
        return Ok(arr);
    };
    let target: Vec<usize> = raw
        .iter()
        .map(|&d| {
            usize::try_from(d).map_err(|_| {
                DsperseError::Pipeline(format!("negative dimension {d} in output_shape"))
            })
        })
        .collect::<Result<Vec<_>>>()?;
    if arr.shape() == target.as_slice() {
        return Ok(arr);
    }
    let actual_shape: Vec<usize> = arr.shape().to_vec();
    let actual_elems: usize = actual_shape.iter().product();
    let target_elems: usize = target.iter().product();
    if actual_elems != target_elems {
        return Err(DsperseError::Pipeline(format!(
            "channel_split output element count mismatch: \
             actual {actual_elems} (shape {actual_shape:?}) vs target {target_elems} (shape {target:?})"
        )));
    }
    arr.into_shape_with_order(ndarray::IxDyn(&target))
        .map_err(|e| {
            DsperseError::Pipeline(format!(
                "channel_split output reshape from {actual_shape:?} to {target:?}: {e}",
            ))
        })
}

#[allow(clippy::too_many_arguments)]
pub(crate) fn execute_channel_split(
    slices_dir: &Path,
    slice_run_dir: &Path,
    slice_id: &str,
    cs: &ChannelSplitInfo,
    target_shape: Option<&[i64]>,
    tensor_cache: &TensorStore,
    backend: &JstproveBackend,
    donor_init_map: Option<&HashMap<String, &TensorProto>>,
) -> Result<crate::schema::execution::StrategyOutput> {
    let input_arr = tensor_cache.get(&cs.input_name)?.clone();

    let (input_4d, n, h) = if input_arr.ndim() == 4 {
        let s = input_arr.shape();
        let n = s[0];
        if n != 1 {
            return Err(DsperseError::Pipeline(format!(
                "channel split: batch size {n} not supported, expected 1"
            )));
        }
        let h = s[2];
        let arr =
            Array4::from_shape_vec((n, s[1], s[2], s[3]), input_arr.iter().copied().collect())
                .map_err(|e| DsperseError::Pipeline(format!("channel split reshape: {e}")))?;
        (arr, n, h)
    } else {
        let n = 1usize;
        let input_flat: Vec<f64> = input_arr.iter().copied().collect();
        let total_elements = input_flat.len();
        let nc = n * cs.c_in;
        if nc > 0 && !total_elements.is_multiple_of(nc) {
            return Err(DsperseError::Pipeline(format!(
                "channel split reshape: total_elements {total_elements} not divisible by n*c_in ({nc})"
            )));
        }
        let spatial = if cs.c_in > 0 && total_elements > 0 {
            total_elements / nc
        } else {
            cs.h * cs.w
        };
        let h = cs.h.max(1);
        if spatial > 0 && h > 0 && !spatial.is_multiple_of(h) {
            return Err(DsperseError::Pipeline(format!(
                "channel split reshape: spatial {spatial} not divisible by h={h}"
            )));
        }
        let w = if spatial > 0 && h > 0 {
            spatial / h
        } else {
            cs.w.max(1)
        };
        let arr = Array4::from_shape_vec((n, cs.c_in, h, w), input_flat)
            .map_err(|e| DsperseError::Pipeline(format!("channel split reshape: {e}")))?;
        (arr, n, h)
    };

    let mut accumulated: Option<Array4<f64>> = None;

    tracing::info!(
        slice = %slice_id,
        num_groups = cs.groups.len(),
        "channel split execution"
    );

    let n_channels = input_4d.shape()[1];
    for group in &cs.groups {
        if group.c_end > n_channels || group.c_start > group.c_end {
            return Err(DsperseError::Pipeline(format!(
                "channel group {} bounds [{}, {}) exceed channel dimension {}",
                group.group_idx, group.c_start, group.c_end, n_channels
            )));
        }
        let group_input = input_4d
            .slice(s![.., group.c_start..group.c_end, .., ..])
            .to_owned();
        let group_input_dyn = group_input.into_dyn();

        let group_dir = slice_run_dir.join(format!("group_{}", group.group_idx));
        std::fs::create_dir_all(&group_dir).map_err(|e| DsperseError::io(e, &group_dir))?;

        let group_output = execute_channel_group(
            slices_dir,
            &group_dir,
            group,
            &group_input_dyn,
            backend,
            donor_init_map,
        )?;

        let group_4d = if group_output.ndim() == 4 {
            let s = group_output.shape();
            Array4::from_shape_vec(
                (s[0], s[1], s[2], s[3]),
                group_output.iter().copied().collect(),
            )
            .map_err(|e| DsperseError::Pipeline(format!("group output reshape: {e}")))?
        } else {
            let group_flat: Vec<f64> = group_output.iter().copied().collect();
            let (out_h, out_w) = if cs.out_h > 0 && cs.out_w > 0 {
                (cs.out_h, cs.out_w)
            } else if cs.c_out > 0 {
                let out_spatial = group_flat.len() / (n * cs.c_out);
                if h > 0 && out_spatial > 0 && out_spatial.is_multiple_of(h) {
                    (h, out_spatial / h)
                } else {
                    return Err(DsperseError::Pipeline(format!(
                        "cannot determine spatial layout for channel_split output: {} elements, c_out={}, set out_h/out_w in metadata",
                        group_flat.len(),
                        cs.c_out
                    )));
                }
            } else {
                return Err(DsperseError::Pipeline("channel split c_out is 0".into()));
            };
            if n * cs.c_out * out_h * out_w != group_flat.len() {
                return Err(DsperseError::Pipeline(format!(
                    "group output reshape mismatch: expected {} elements (n={}, c_out={}, h={}, w={}), got {}",
                    n * cs.c_out * out_h * out_w,
                    n,
                    cs.c_out,
                    out_h,
                    out_w,
                    group_flat.len()
                )));
            }
            Array4::from_shape_vec((n, cs.c_out, out_h, out_w), group_flat)
                .map_err(|e| DsperseError::Pipeline(format!("group output reshape: {e}")))?
        };

        accumulated = Some(match accumulated {
            Some(acc) => {
                if acc.shape() != group_4d.shape() {
                    return Err(DsperseError::Pipeline(format!(
                        "channel group {} shape {:?} does not match accumulator shape {:?}",
                        group.group_idx,
                        group_4d.shape(),
                        acc.shape()
                    )));
                }
                acc + &group_4d
            }
            None => group_4d,
        });
    }

    if let Some(ref bias_path_str) = cs.bias_path {
        let bias_file = resolve_relative_path(slices_dir, bias_path_str)?;
        if !bias_file.exists() {
            return Err(DsperseError::Pipeline(format!(
                "configured bias file not found: {} (bias_path={bias_path_str})",
                bias_file.display()
            )));
        }
        let bias_data = read_msgpack(&bias_file)?;
        let bias_flat = crate::utils::io::flatten_nested_list(&bias_data);
        if bias_flat.len() != cs.c_out {
            return Err(DsperseError::Pipeline(format!(
                "bias length {} does not match c_out {}",
                bias_flat.len(),
                cs.c_out
            )));
        }
        if let Some(ref mut acc) = accumulated {
            for ((_, c, _, _), val) in acc.indexed_iter_mut() {
                *val += bias_flat[c];
            }
        }
    }

    let output = match accumulated {
        Some(acc) => reshape_channel_split_output(acc.into_dyn(), target_shape)?,
        None => {
            return Err(DsperseError::Pipeline(format!(
                "channel_split produced no output for '{}'",
                cs.output_name
            )));
        }
    };

    Ok(crate::schema::execution::StrategyOutput {
        info: ExecutionInfo {
            method: ExecutionMethod::ChannelSplit,
            success: true,
            error: None,
            witness_file: None,
            tile_exec_infos: Vec::new(),
        },
        outputs: vec![(cs.output_name.clone(), output)],
    })
}

fn execute_channel_group(
    slices_dir: &Path,
    group_dir: &Path,
    group: &ChannelGroupInfo,
    group_input: &ArrayD<f64>,
    backend: &JstproveBackend,
    donor_init_map: Option<&HashMap<String, &TensorProto>>,
) -> Result<ArrayD<f64>> {
    let onnx_path = resolve_relative_path(slices_dir, &group.path)?;

    let patched_onnx = if let Some(map) = donor_init_map {
        Some(crate::slicer::onnx_proto::build_patched_onnx(
            &onnx_path, map,
        )?)
    } else {
        None
    };
    let effective_onnx = patched_onnx
        .as_ref()
        .map_or(onnx_path.as_path(), |t| t.path());

    if let Some(circuit_path) =
        resolve_circuit_path_optional(slices_dir, group.jstprove_circuit_path.as_deref())?
    {
        let params = backend.load_params(&circuit_path)?;
        let is_wai = params.as_ref().is_some_and(|p| p.weights_as_inputs);

        if donor_init_map.is_some() && !is_wai {
            return Err(DsperseError::Pipeline(format!(
                "group_{}: consumer weights require circuits compiled with --weights-as-inputs",
                group.group_idx
            )));
        }

        let output_tensor = run_onnx_inference(effective_onnx, group_input)?;

        let flat: Vec<f64> = group_input.iter().copied().collect();
        let witness_bytes = if is_wai {
            generate_wai_witness(
                backend,
                &circuit_path,
                &onnx_path,
                donor_init_map,
                params.as_ref().unwrap(),
                &flat,
            )?
        } else {
            backend.witness_f64(&circuit_path, &flat, &[])?
        };

        let witness_path = group_dir.join(crate::utils::paths::WITNESS_FILE);
        std::fs::write(&witness_path, &witness_bytes)
            .map_err(|e| DsperseError::io(e, &witness_path))?;

        Ok(output_tensor)
    } else {
        run_onnx_inference(effective_onnx, group_input)
    }
}
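Editor's note, not crate code: the reshape contract that `reshape_channel_split_output` at the top of this file enforces can be sketched without `ndarray`. A target shape is applied only when it preserves the element count; a mismatch surfaces as an error instead of a silently mis-shaped tensor:

```rust
// Hypothetical standalone sketch of the element-count check in
// `reshape_channel_split_output`. Row-major data is untouched; only the
// logical shape is reinterpreted, and only when the counts agree.
fn checked_reshape(
    data: Vec<f64>,
    target: &[usize],
) -> Result<(Vec<f64>, Vec<usize>), String> {
    let wanted: usize = target.iter().product();
    if data.len() != wanted {
        return Err(format!(
            "element count mismatch: actual {} vs target {} (shape {:?})",
            data.len(),
            wanted,
            target
        ));
    }
    Ok((data, target.to_vec()))
}

fn main() {
    // 6 elements reshape cleanly to [1, 2, 3].
    let (_, shape) = checked_reshape(vec![0.0; 6], &[1, 2, 3]).unwrap();
    assert_eq!(shape, vec![1, 2, 3]);
    // 6 elements cannot become [4, 2]; the caller gets an error.
    assert!(checked_reshape(vec![0.0; 6], &[4, 2]).is_err());
    println!("ok");
}
```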


================================================
FILE: crates/dsperse/src/pipeline/combined.rs
================================================
use std::collections::{HashMap, HashSet};
use std::path::{Path, PathBuf};

use ndarray::{ArrayD, IxDyn};

use super::incremental::SliceWork;
use super::runner::{build_execution_chain, build_run_metadata, load_model_metadata};
use super::strategy::ExecutionStrategy;
use super::tensor_store::TensorStore;
use crate::backend::onnx::NamedOutputs;
use crate::error::{DsperseError, Result};
use crate::schema::execution::{ExecutionChain, RunMetadata};
use crate::schema::metadata::ModelMetadata;

pub struct CombinedRun {
    tensor_cache: TensorStore,
    model_meta: ModelMetadata,
    run_meta: RunMetadata,
    execution_chain: ExecutionChain,
    slices_dir: PathBuf,
    pending_slices: HashSet<String>,
    failed_slices: HashSet<String>,
}

impl CombinedRun {
    pub fn new(slices_dir: &Path, input: ArrayD<f64>) -> Result<Self> {
        let model_meta = load_model_metadata(slices_dir)?;

        let combined_path =
            crate::slicer::combiner::ensure_combined_materialized(slices_dir, &model_meta)?;

        crate::slicer::materializer::ensure_all_slices_materialized(slices_dir, &model_meta)?;

        let first_slice = model_meta
            .slices
            .first()
            .ok_or_else(|| DsperseError::Pipeline("model has no slices".into()))?;
        let declared_inputs = &first_slice.dependencies.filtered_inputs;
        if declared_inputs.is_empty() {
            return Err(DsperseError::Pipeline(
                "first slice has no input dependency".into(),
            ));
        }

        let named_outputs = run_combined_onnx(&combined_path, &input, declared_inputs)?;

        let mut tensor_cache = TensorStore::new();
        for (name, (data, shape)) in &named_outputs {
            let arr = ArrayD::from_shape_vec(IxDyn(shape), data.clone())
                .map_err(|e| DsperseError::Pipeline(format!("output reshape '{name}': {e}")))?;
            tensor_cache.put(name.clone(), arr);
        }
        for name in declared_inputs {
            if !tensor_cache.contains(name) {
                tensor_cache.put(name.clone(), input.clone());
            }
        }

        // Seed the tensor_cache with any initializer-backed tensor
        // the slice metadata references.  The slicer's constant-
        // folding passes can turn intermediate tensors (e.g. a
        // Transpose over a constant) into initializers in the
        // transformed graph, while leaving downstream slice
        // metadata pointing at the original tensor name.  ORT
        // does not emit those names among its named outputs (they
        // are not declared as graph outputs of combined.onnx and
        // have no producing node), so without this seed the
        // subsequent `tensor_cache.get` in `all_circuit_work` fails
        // with `tensor '<name>' not found in store` and the whole
        // run aborts before a single DSlice gets dispatched.
        seed_tensor_cache_from_initializers(&combined_path, &model_meta, &mut tensor_cache)?;

        let chain = build_execution_chain(&model_meta, slices_dir)?;
        let run_meta = build_run_metadata(&model_meta, slices_dir, &chain)?;

        let mut pending_slices = HashSet::new();
        for slice in &model_meta.slices {
            let slice_id = format!("slice_{}", slice.index);
            let node = chain.nodes.get(&slice_id).ok_or_else(|| {
                DsperseError::Pipeline(format!("execution chain missing node for {slice_id}"))
            })?;
            if node.use_circuit {
                pending_slices.insert(slice_id);
            }
        }

        tracing::info!(
            total_slices = model_meta.slices.len(),
            circuit_slices = pending_slices.len(),
            cached_tensors = tensor_cache.len(),
            "combined inference complete, all circuit work queued"
        );

        Ok(Self {
            tensor_cache,
            model_meta,
            run_meta,
            execution_chain: chain,
            slices_dir: slices_dir.to_path_buf(),
            pending_slices,
            failed_slices: HashSet::new(),
        })
    }

    pub fn all_circuit_work(&self) -> Result<Vec<SliceWork>> {
        let mut work_items = Vec::with_capacity(self.pending_slices.len());

        for slice in &self.model_meta.slices {
            let slice_id = format!("slice_{}", slice.index);
            if !self.pending_slices.contains(&slice_id) {
                continue;
            }

            let node = self.execution_chain.nodes.get(&slice_id).ok_or_else(|| {
                DsperseError::Pipeline(format!("execution chain missing node for {slice_id}"))
            })?;

            let meta = self.run_meta.slices.get(&slice_id).ok_or_else(|| {
                DsperseError::Pipeline(format!("run metadata missing slice {slice_id}"))
            })?;

            let strategy = ExecutionStrategy::from_metadata(meta, node.use_circuit)?;
            let (input, named_inputs) = match strategy {
                ExecutionStrategy::ChannelSplit(cs) => {
                    let t = self.tensor_cache.get(&cs.input_name)?.clone();
                    (t, Vec::new())
                }
                ExecutionStrategy::DimSplit(ds) => {
                    let t = self.tensor_cache.get(&ds.input_name)?.clone();
                    (t, Vec::new())
                }
                ExecutionStrategy::Tiled(tiling) => {
                    let t = self.tensor_cache.get(&tiling.input_name)?.clone();
                    (t, Vec::new())
                }
                ExecutionStrategy::Single { .. } => {
                    let filtered = &meta.dependencies.filtered_inputs;
                    let mut named = Vec::with_capacity(filtered.len());
                    let mut flat_elems: Vec<f64> = Vec::new();
                    for name in filtered {
                        let arr = self.tensor_cache.get(name)?;
                        named.push((name.clone(), arr.clone()));
                        flat_elems.extend(arr.iter());
                    }
                    let concatenated = ndarray::ArrayD::from_shape_vec(
                        ndarray::IxDyn(&[flat_elems.len()]),
                        flat_elems,
                    )
                    .map_err(|e| DsperseError::Pipeline(format!("flatten inputs: {e}")))?;
                    (concatenated, named)
                }
            };

            work_items.push(SliceWork {
                slice_id,
                input,
                named_inputs,
                backend: node.backend,
                use_circuit: node.use_circuit,
                tiling: meta.tiling.clone(),
                channel_split: meta.channel_split.clone(),
                circuit_path: node.circuit_path.clone(),
                onnx_path: node.onnx_path.clone(),
                slice_meta: meta.clone(),
            });
        }

        Ok(work_items)
    }

    pub fn mark_slice_done(&mut self, slice_id: &str) -> bool {
        self.pending_slices.remove(slice_id)
    }

    pub fn mark_slice_failed(&mut self, slice_id: &str) -> bool {
        let was_pending = self.pending_slices.remove(slice_id);
        if was_pending {
            self.failed_slices.insert(slice_id.to_string());
        }
        was_pending
    }

    pub fn is_slice_failed(&self, slice_id: &str) -> bool {
        self.failed_slices.contains(slice_id)
    }

    pub fn failed_count(&self) -> usize {
        self.failed_slices.len()
    }

    pub fn is_complete(&self) -> bool {
        self.pending_slices.is_empty()
    }

    pub fn model_meta(&self) -> &ModelMetadata {
        &self.model_meta
    }

    pub fn final_output(&self) -> Option<&ArrayD<f64>> {
        let last_slice = self.model_meta.slices.last()?;
        let slice_id = format!("slice_{}", last_slice.index);
        let meta = self.run_meta.slices.get(&slice_id)?;

        let strategy = ExecutionStrategy::from_metadata(meta, false).ok()?;
        match strategy.output_name() {
            Some(name) => self.tensor_cache.try_get(name),
            None => {
                let output_name = meta.dependencies.output.first()?;
                self.tensor_cache.try_get(output_name)
            }
        }
    }

    pub fn expected_slice_outputs(&self, slice_id: &str) -> Option<Vec<f64>> {
        let meta = self.run_meta.slices.get(slice_id)?;
        let output_names = &meta.dependencies.output;
        self.outputs_for_names(output_names)
    }

    pub fn outputs_for_names(&self, names: &[String]) -> Option<Vec<f64>> {
        let mut flat = Vec::new();
        for name in names {
            let tensor = self.tensor_cache.try_get(name)?;
            flat.extend(tensor.iter());
        }
        if flat.is_empty() { None } else { Some(flat) }
    }

    pub fn slice_tile_counts(&self) -> (usize, usize, HashMap<String, usize>) {
        let total_slices = self.model_meta.slices.len();
        let mut map = HashMap::with_capacity(total_slices);
        let mut total_tiles = 0usize;
        for s in &self.model_meta.slices {
            let tiles = s.tiling.as_ref().map(|t| t.num_tiles).unwrap_or(1);
            map.insert(format!("slice_{}", s.index), tiles);
            total_tiles += tiles;
        }
        (total_slices, total_tiles, map)
    }

    pub fn slices_dir(&self) -> &Path {
        &self.slices_dir
    }

    pub fn pending_count(&self) -> usize {
        self.pending_slices.len()
    }
}

fn run_combined_onnx(
    combined_path: &Path,
    input: &ArrayD<f64>,
    declared_inputs: &[String],
) -> Result<NamedOutputs> {
    if declared_inputs.len() == 1 {
        let input_flat: Vec<f64> = input.iter().copied().collect();
        let input_shape = input.shape();
        crate::backend::onnx::run_inference_named(combined_path, &input_flat, input_shape)
    } else {
        Err(DsperseError::Pipeline(format!(
            "combined mode requires single input, got {}",
            declared_inputs.len()
        )))
    }
}

/// Populate `tensor_cache` with any combined-graph initializer
/// whose name appears in slice metadata as a `filtered_input` or a
/// declared `output`.  Without this, a slice that depends on a
/// constant-folded tensor (one the slicer turned from a node
/// output into an initializer) would fail at the
/// `tensor_cache.get(name)` call in `all_circuit_work` even though
/// the value is right there in the combined ONNX.
fn seed_tensor_cache_from_initializers(
    combined_path: &Path,
    model_meta: &ModelMetadata,
    tensor_cache: &mut TensorStore,
) -> Result<()> {
    let needed: HashSet<&str> = model_meta
        .slices
        .iter()
        .flat_map(|s| {
            s.dependencies
                .filtered_inputs
                .iter()
                .chain(s.dependencies.output.iter())
        })
        .map(String::as_str)
        .collect();
    if needed.is_empty() {
        return Ok(());
    }

    let model = crate::slicer::onnx_proto::load_model(combined_path)?;
    let graph = match &model.graph {
        Some(g) => g,
        None => return Ok(()),
    };

    let mut seeded = 0usize;
    for init in &graph.initializer {
        if !needed.contains(init.name.as_str()) {
            continue;
        }
        if tensor_cache.contains(&init.name) {
            continue;
        }
        // Negative dims would silently wrap to huge positive
        // values via `as usize`; reject up front so a malformed
        // initializer surfaces an error here instead of
        // allocating a multi-petabyte array below.
        let shape: Vec<usize> = match init
            .dims
            .iter()
            .map(|&d| usize::try_from(d))
            .collect::<std::result::Result<Vec<_>, _>>()
        {
            Ok(s) => s,
            Err(e) => {
                tracing::debug!(
                    name = %init.name,
                    dims = ?init.dims,
                    error = %e,
                    "skipping initializer-backed slice tensor: invalid (negative) dimension"
                );
                continue;
            }
        };
        // Use checked_mul so an arithmetic overflow surfaces as a
        // skip (and the slice executor downstream produces a
        // clearer error if it actually needed the value), instead
        // of wrapping silently and mis-comparing against
        // `data.len()`.
        let expected: Option<usize> = shape.iter().try_fold(1usize, |acc, &d| acc.checked_mul(d));
        let Some(expected) = expected else {
            tracing::debug!(
                name = %init.name,
                dims = ?init.dims,
                "skipping initializer-backed slice tensor: shape product overflowed usize"
            );
            continue;
        };
        // Decode straight to f64 so DOUBLE / INT64 initializers
        // keep their full precision -- the previous f32-then-widen
        // chain truncated DOUBLE mantissas and silently lost
        // precision on INT64 magnitudes outside f32's exact range.
        let data: Vec<f64> = crate::slicer::onnx_proto::tensor_to_f64(init);
        if data.len() != expected {
            // Skip rather than fail: an initializer whose declared
            // shape doesn't match its element count can still be
            // useful elsewhere (some quantised tensors store packed
            // bytes), but we cannot reshape it into ArrayD<f64>
            // here without guessing.  Leave it to the slice ONNX
            // executor to surface a clearer error if it actually
            // needs the value.
            tracing::debug!(
                name = %init.name,
                declared_shape = ?shape,
                declared_elements = expected,
                actual_elements = data.len(),
                "skipping initializer-backed slice tensor: declared shape != element count"
            );
            continue;
        }
        let arr = ArrayD::from_shape_vec(IxDyn(&shape), data).map_err(|e| {
            DsperseError::Pipeline(format!(
                "seed initializer-backed tensor '{}' from combined.onnx: {e}",
                init.name
            ))
        })?;
        tensor_cache.put(init.name.clone(), arr);
        seeded += 1;
    }
    if seeded > 0 {
        tracing::info!(
            seeded,
            "seeded tensor_cache with constant-folded slice-input initializers"
        );
    }
    Ok(())
}


================================================
FILE: crates/dsperse/src/pipeline/compiler.rs
================================================
use std::collections::HashMap;
use std::path::{Path, PathBuf};

use rayon::prelude::*;

use crate::backend::jstprove::JstproveBackend;
use crate::converter;
use crate::error::{DsperseError, Result};
use crate::schema::metadata::ModelMetadata;
use crate::slicer::autotiler::estimate_slice_constraints;
use crate::slicer::onnx_proto;
use crate::utils::paths::{find_metadata_path, slice_dir_path};

type CircuitCache = std::sync::Mutex<HashMap<String, PathBuf>>;

enum CompileOutcome {
    Compiled,
    CompiledChannelSplit {
        group_circuits: Vec<(usize, String)>,
    },
    CompiledDimSplit,
    Skipped,
    SkippedOverSize {
        estimated: u64,
        threshold: u64,
    },
}

/// Summary of a compile_slices invocation.  The pass returns Ok
/// even when individual slice compilations fail, so callers must
/// inspect `failed` to decide whether to proceed (e.g. allow
/// partial-coverage ONNX fallback) or abort.  Keeping the
/// compiled count explicit lets the CLI / analyze command
/// report a structured summary instead of inferring success from
/// log lines.
#[derive(Debug, Default)]
pub struct CompileReport {
    pub compiled: usize,
    pub failed: Vec<(usize, DsperseError)>,
}

impl CompileReport {
    /// Return Ok(self) when every slice compiled cleanly.  Otherwise
    /// return a generic Pipeline error; callers layer their own
    /// actionable guidance on top (the CLI mentions its
    /// --allow-onnx-fallback flag, the Python binding mentions the
    /// `allow_onnx_fallback` keyword).  Keeping the library message
    /// surface-agnostic avoids leaking CLI conventions into the
    /// Python / Rust API error stream.
    pub fn ok_if_no_failures(self) -> Result<Self> {
        if self.failed.is_empty() {
            Ok(self)
        } else {
            Err(DsperseError::Pipeline(format!(
                "compile_slices: {} slice(s) failed to compile; the caller must opt in to partial coverage before proceeding",
                self.failed.len()
            )))
        }
    }
}
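The report-then-gate contract documented above can be exercised in isolation; a standalone sketch with simplified, illustrative types (a `String` error stands in for `DsperseError`):

```rust
// Sketch of the CompileReport contract (not the crate's real
// definitions): the pass returns Ok even when individual items
// fail, and an explicit gate forces the caller to opt in before
// accepting partial coverage.
struct Report {
    compiled: usize,
    failed: Vec<(usize, String)>,
}

impl Report {
    // Mirrors ok_if_no_failures: a surface-agnostic error message,
    // so each caller (CLI, bindings) layers its own guidance on top.
    fn ok_if_no_failures(self) -> Result<Self, String> {
        if self.failed.is_empty() {
            Ok(self)
        } else {
            Err(format!(
                "{} item(s) failed; opt in to partial coverage to proceed",
                self.failed.len()
            ))
        }
    }
}

fn main() {
    let clean = Report { compiled: 3, failed: Vec::new() };
    assert_eq!(clean.ok_if_no_failures().unwrap().compiled, 3);

    let partial = Report { compiled: 2, failed: vec![(1, "bad slice".into())] };
    let err = partial.ok_if_no_failures().err().unwrap();
    assert!(err.starts_with("1 item(s) failed"));
}
```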

/// Backfill split metadata fields that only become resolvable after
/// slicing (channel_split.groups populated from disk,
/// dim_split.template_path inferred from the materialized template
/// ONNX), and strip dim_split entries whose template could not be
/// materialized.  Called from both compile_slices and analyze_slices
/// so the two classifications agree on what actually counts as a
/// channel- or dim-split slice.  Persists the normalized metadata
/// back to disk when any field changes.
fn normalize_split_metadata(
    slices_dir: &Path,
    meta_path: &Path,
    metadata: &mut ModelMetadata,
) -> Result<()> {
    if metadata.original_model_path.is_some() {
        crate::slicer::materializer::ensure_all_slices_materialized(slices_dir, metadata)?;
    }

    let mut metadata_dirty = false;
    for slice in &mut metadata.slices {
        if let Some(ref mut cs) = slice.channel_split
            && cs.groups.is_empty()
        {
            let populated = populate_channel_split_groups(slices_dir, slice.index, cs)?;
            if populated {
                metadata_dirty = true;
            }
        }
        if let Some(ref mut ds) = slice.dim_split
            && ds.template_path.is_none()
        {
            let tmpl_rel = format!("slice_{}/payload/dim_template.onnx", slice.index);
            if slices_dir.join(&tmpl_rel).exists() {
                ds.template_path = Some(tmpl_rel);
                metadata_dirty = true;
            }
        }
    }
    // Strip dim_split metadata from slices where template creation
    // failed (axis-separability rejection, unsupported split kind).
    // Leaving stale dim_split entries in the metadata causes
    // downstream runners and the packager to emit bundles that fail
    // at the strategy validation stage ("dim_split present but
    // template_path is missing").
    for slice in &mut metadata.slices {
        if slice
            .dim_split
            .as_ref()
            .is_some_and(|ds| ds.template_path.is_none())
        {
            tracing::info!(
                slice = slice.index,
                "stripping dim_split metadata (no template materialized)"
            );
            slice.dim_split = None;
            metadata_dirty = true;
        }
    }
    if metadata_dirty {
        metadata.save(meta_path)?;
        tracing::info!("persisted materialized split groups to metadata");
    }
    Ok(())
}
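The backfill-then-persist-once flow above can be sketched in isolation (hypothetical `Slice` type and `on_disk` probe; a save counter stands in for `metadata.save(meta_path)`):

```rust
// Sketch of the dirty-flag pattern in normalize_split_metadata:
// mutate metadata in place, then persist once at the end, and only
// when something actually changed. Types here are illustrative.
struct Slice {
    index: usize,
    template_path: Option<String>,
}

fn normalize(slices: &mut [Slice], on_disk: impl Fn(&str) -> bool, saves: &mut usize) {
    let mut dirty = false;
    for s in slices.iter_mut() {
        if s.template_path.is_none() {
            let rel = format!("slice_{}/payload/dim_template.onnx", s.index);
            if on_disk(&rel) {
                s.template_path = Some(rel);
                dirty = true;
            }
        }
    }
    // A single conditional persist at the end of the pass.
    if dirty {
        *saves += 1;
    }
}

fn main() {
    let mut slices = vec![
        Slice { index: 0, template_path: None },
        Slice { index: 1, template_path: Some("kept".into()) },
    ];
    let mut saves = 0;
    normalize(&mut slices, |p| p.starts_with("slice_0"), &mut saves);
    assert_eq!(
        slices[0].template_path.as_deref(),
        Some("slice_0/payload/dim_template.onnx")
    );
    // Nothing left to backfill on a second pass, so no extra save.
    normalize(&mut slices, |_| true, &mut saves);
    assert_eq!(saves, 1);
}
```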

#[allow(clippy::too_many_arguments)]
pub fn compile_slices(
    slices_dir: &Path,
    backend: &JstproveBackend,
    proof_config: jstprove_circuits::api::ProofConfigType,
    parallel: usize,
    weights_as_inputs: bool,
    layers: Option<&[usize]>,
    jstprove_ops: &[&str],
    skip_compile_over_size: Option<u64>,
    holographic: bool,
) -> Result<CompileReport> {
    if holographic && proof_config != jstprove_circuits::api::ProofConfigType::GoldilocksExt4Whir {
        return Err(DsperseError::Pipeline(format!(
            "--holographic requires --proof-config goldilocks_ext4_whir; got {proof_config}"
        )));
    }
    let meta_path = find_metadata_path(slices_dir).ok_or_else(|| {
        DsperseError::Metadata(format!(
            "no {} found in slices directory",
            crate::utils::paths::METADATA_FILE
        ))
    })?;
    let mut metadata = ModelMetadata::load(&meta_path)?;
    normalize_split_metadata(slices_dir, &meta_path, &mut metadata)?;

    let slices: Vec<_> = metadata
        .slices
        .iter()
        .filter(|s| layers.is_none_or(|l| l.contains(&s.index)))
        .cloned()
        .collect();

    tracing::info!(total = slices.len(), "compiling slices");

    let exclude_from_wai: std::collections::HashSet<St
SYMBOL INDEX (908 symbols across 50 files)

FILE: crates/dsperse/benches/serialization.rs
  function make_slice_metadata (line 14) | fn make_slice_metadata(index: usize) -> SliceMetadata {
  function make_model_metadata (line 40) | fn make_model_metadata(num_slices: usize) -> ModelMetadata {
  function make_run_metadata (line 62) | fn make_run_metadata(num_slices: usize) -> RunMetadata {
  function bench_roundtrip (line 151) | fn bench_roundtrip<T: Serialize + for<'de> Deserialize<'de>>(
  function serialization_benchmarks (line 182) | fn serialization_benchmarks(c: &mut Criterion) {

FILE: crates/dsperse/build.rs
  function main (line 1) | fn main() {

FILE: crates/dsperse/src/backend/jstprove.rs
  type JstproveBackend (line 20) | pub struct JstproveBackend {
    method new (line 35) | pub fn new() -> Self {
    method with_compress (line 39) | pub fn with_compress(mut self, compress: bool) -> Self {
    method compress (line 44) | pub fn compress(&self) -> bool {
    method load_bundle_cached (line 48) | pub fn load_bundle_cached(&self, path: &Path) -> Result<Arc<CompiledCi...
    method clear_cache (line 64) | pub fn clear_cache(&self) {
    method evict_cache_by_prefix (line 80) | pub fn evict_cache_by_prefix(&self, prefix: &Path) {
    method resolve_proof_config (line 106) | fn resolve_proof_config(bundle: &CompiledCircuit) -> Result<ProofConfi...
    method resolve_proof_config_from_manifest (line 131) | fn resolve_proof_config_from_manifest(&self, circuit_path: &Path) -> R...
    method compile (line 165) | pub fn compile(
    method witness (line 198) | pub fn witness(
    method witness_f64 (line 221) | pub fn witness_f64(
    method load_params (line 249) | pub fn load_params(&self, circuit_path: &Path) -> Result<Option<Circui...
    method prove (line 254) | pub fn prove(&self, circuit_path: &Path, witness_bytes: &[u8]) -> Resu...
    method extract_outputs (line 262) | pub fn extract_outputs(
    method extract_outputs_full (line 282) | pub fn extract_outputs_full(
    method verify (line 296) | pub fn verify(
    method verify_and_extract (line 309) | pub fn verify_and_extract(
    method setup_holographic_vk (line 345) | pub fn setup_holographic_vk(&self, circuit_path: &Path) -> Result<()> {
    method prove_holographic (line 367) | pub fn prove_holographic(&self, circuit_path: &Path, witness_bytes: &[...
    method verify_holographic (line 379) | pub fn verify_holographic(&self, circuit_path: &Path, proof_bytes: &[u...
  method default (line 26) | fn default() -> Self {
  method prove (line 396) | fn prove(&self, circuit_path: &Path, witness_bytes: &[u8]) -> Result<Vec...
  method verify (line 400) | fn verify(
  method witness_f64 (line 409) | fn witness_f64(
  function load_bundle (line 419) | fn load_bundle(circuit_path: &Path) -> Result<CompiledCircuit> {
  type WarmCircuit (line 428) | pub struct WarmCircuit {
    method load (line 437) | pub fn load(
    method witness_f64 (line 457) | pub fn witness_f64(&self, activations: &[f64]) -> Result<Vec<u8>> {
  function bundle_cache_starts_empty (line 478) | fn bundle_cache_starts_empty() {
  function backend_constructs_without_proof_config_state (line 485) | fn backend_constructs_without_proof_config_state() {
  function clear_cache_on_empty_succeeds (line 491) | fn clear_cache_on_empty_succeeds() {
  function clear_cache_removes_entries (line 499) | fn clear_cache_removes_entries() {
  function load_bundle_cached_returns_error_for_missing_path (line 518) | fn load_bundle_cached_returns_error_for_missing_path() {
  function resolve_proof_config_rejects_unstamped_bundle (line 526) | fn resolve_proof_config_rejects_unstamped_bundle() {

FILE: crates/dsperse/src/backend/onnx.rs
  function coerce_tdim_inputs (line 11) | pub fn coerce_tdim_inputs(inputs: &TVec<TValue>) -> TVec<TValue> {
  type NamedOutputs (line 29) | pub type NamedOutputs = HashMap<String, (Vec<f64>, Vec<usize>)>;
  function load_onnx_model (line 31) | fn load_onnx_model(onnx_path: &Path) -> Result<InferenceModel> {
  function resolve_concrete_shape (line 37) | fn resolve_concrete_shape(model: &InferenceModel, input_shape: &[usize])...
  function resolve_input_datum_type (line 66) | fn resolve_input_datum_type(model: &InferenceModel, idx: usize) -> Resul...
  function optimize_to_runnable (line 77) | fn optimize_to_runnable(
  function run_inference_with_coercion (line 91) | pub fn run_inference_with_coercion(
  function extract_all_outputs (line 155) | fn extract_all_outputs(result: &[TValue]) -> Result<NamedOutputs> {
  function load_runnable (line 165) | fn load_runnable(
  constant I64_SAFE_BOUND_F64 (line 176) | const I64_SAFE_BOUND_F64: f64 = I64_SAFE_BOUND as f64;
  function reject_non_finite (line 178) | fn reject_non_finite(v: f64, idx: usize, type_name: &str) -> Result<()> {
  function validate_integer_input (line 187) | fn validate_integer_input(
  function build_input_tvalue (line 213) | fn build_input_tvalue(input_data: &[f64], shape: &[usize], dt: DatumType...
  function run_single (line 301) | fn run_single(
  type WarmModel (line 312) | pub struct WarmModel {
    method load (line 319) | pub fn load(onnx_path: &Path, input_shape: &[usize]) -> Result<Self> {
    method run (line 328) | pub fn run(&self, input_data: &[f64]) -> Result<(Vec<f64>, Vec<usize>)> {
  function run_inference (line 334) | pub fn run_inference(
  function run_inference_named (line 344) | pub fn run_inference_named(
  function run_inference_multi (line 372) | pub fn run_inference_multi(
  function run_inference_multi_named (line 380) | pub fn run_inference_multi_named(
  function run_multi_inner (line 388) | fn run_multi_inner(
  function collect_output_names (line 478) | fn collect_output_names(model: &InferenceModel) -> Vec<String> {
  constant I64_SAFE_BOUND (line 493) | const I64_SAFE_BOUND: i64 = 9_007_199_254_740_992;
  function i64_to_f64_checked (line 495) | fn i64_to_f64_checked(v: i64, label: &str) -> Result<f64> {
  function u64_to_f64_checked (line 504) | fn u64_to_f64_checked(v: u64, label: &str) -> Result<f64> {
  function tvalue_to_f64 (line 513) | fn tvalue_to_f64(tv: &TValue, label: &str) -> Result<(Vec<f64>, Vec<usiz...
  function zip_named_outputs (line 593) | fn zip_named_outputs(names: &[String], result: &[TValue]) -> Result<Name...
  function extract_first_output (line 610) | fn extract_first_output(result: &[TValue]) -> Result<(Vec<f64>, Vec<usiz...
  constant TEST_OPS (line 621) | const TEST_OPS: &[&str] = &["Conv", "Gemm", "MatMul"];
  function run_inference_on_sliced_model (line 624) | fn run_inference_on_sliced_model() {
  function run_inference_nonexistent_model (line 666) | fn run_inference_nonexistent_model() {
  function warm_model_load_nonexistent (line 672) | fn warm_model_load_nonexistent() {
  function warm_model_load_and_run_on_slice (line 678) | fn warm_model_load_and_run_on_slice() {
  function zip_named_outputs_empty (line 722) | fn zip_named_outputs_empty() {
  function extract_first_output_empty (line 728) | fn extract_first_output_empty() {
  function build_input_tvalue_respects_declared_dtypes (line 734) | fn build_input_tvalue_respects_declared_dtypes() {
  function build_input_tvalue_rejects_non_finite (line 762) | fn build_input_tvalue_rejects_non_finite() {
  function build_input_tvalue_rejects_fractional_for_integer_dtypes (line 783) | fn build_input_tvalue_rejects_fractional_for_integer_dtypes() {
  function build_input_tvalue_rejects_out_of_range_for_integer_dtypes (line 803) | fn build_input_tvalue_rejects_out_of_range_for_integer_dtypes() {
  function safe_integer_bound_is_inclusive_on_both_sides (line 825) | fn safe_integer_bound_is_inclusive_on_both_sides() {
  function build_input_tvalue_rejects_i64_above_safe_integer_bound (line 845) | fn build_input_tvalue_rejects_i64_above_safe_integer_bound() {
  function build_input_tvalue_rejects_finite_f64_outside_f32_range (line 857) | fn build_input_tvalue_rejects_finite_f64_outside_f32_range() {
  function build_input_tvalue_rejects_non_boolean_for_bool_dtype (line 879) | fn build_input_tvalue_rejects_non_boolean_for_bool_dtype() {
  function write_uint8_cast_to_float_model (line 889) | fn write_uint8_cast_to_float_model(path: &Path) {
  function write_uint8_identity_model (line 905) | fn write_uint8_identity_model(path: &Path) {
  function warm_model_decodes_uint8_output (line 921) | fn warm_model_decodes_uint8_output() {
  function tvalue_to_f64_covers_added_integer_dtypes (line 935) | fn tvalue_to_f64_covers_added_integer_dtypes() {
  function warm_model_runs_non_f32_input_through_planner (line 964) | fn warm_model_runs_non_f32_input_through_planner() {
  function run_inference_multi_honors_per_input_dtype (line 983) | fn run_inference_multi_honors_per_input_dtype() {
  function resolve_input_datum_type_reads_concrete_model_dtype (line 996) | fn resolve_input_datum_type_reads_concrete_model_dtype() {

FILE: crates/dsperse/src/backend/traits.rs
  type ProofBackend (line 5) | pub trait ProofBackend: Send + Sync {
    method prove (line 6) | fn prove(&self, circuit_path: &Path, witness_bytes: &[u8]) -> Result<V...
    method verify (line 8) | fn verify(&self, circuit_path: &Path, witness_bytes: &[u8], proof_byte...
    method witness_f64 (line 11) | fn witness_f64(

FILE: crates/dsperse/src/cli/mod.rs
  function parse_proof_config (line 12) | fn parse_proof_config(value: &str) -> Result<ProofConfig> {
  constant VERSION (line 18) | pub const VERSION: &str = env!("DSPERSE_DISPLAY_VERSION");
  type Cli (line 22) | pub struct Cli {
  type Commands (line 30) | pub enum Commands {
  function dispatch (line 46) | pub fn dispatch(command: Commands) -> Result<()> {
  type SliceArgs (line 63) | pub struct SliceArgs {
  type CombineArgs (line 90) | pub struct CombineArgs {
  type CompileArgs (line 98) | pub struct CompileArgs {
  type RunArgs (line 154) | pub struct RunArgs {
  type ProveArgs (line 182) | pub struct ProveArgs {
  type VerifyArgs (line 194) | pub struct VerifyArgs {
  type PackageArgs (line 206) | pub struct PackageArgs {
  type PublishArgs (line 229) | pub struct PublishArgs {
  type FullRunArgs (line 253) | pub struct FullRunArgs {
  type SetupHolographicArgs (line 325) | pub struct SetupHolographicArgs {
  type CircuitOps (line 341) | struct CircuitOps(Vec<String>);
    method as_refs (line 344) | fn as_refs(&self) -> Vec<&str> {
  function resolve_circuit_ops (line 349) | fn resolve_circuit_ops(proof_system_str: &str, circuit_ops: Option<&str>...
  function resolve_slices_dir (line 385) | fn resolve_slices_dir(slices_dir: Option<PathBuf>, model_dir: &Path) -> ...
  function cmd_slice (line 389) | pub fn cmd_slice(args: SliceArgs) -> Result<()> {
  function cmd_combine (line 409) | pub fn cmd_combine(args: CombineArgs) -> Result<()> {
  function cmd_compile (line 417) | pub fn cmd_compile(args: CompileArgs) -> Result<()> {
  function cmd_run (line 448) | pub fn cmd_run(args: RunArgs) -> Result<()> {
  function cmd_prove (line 474) | pub fn cmd_prove(args: ProveArgs) -> Result<()> {
  function cmd_verify (line 482) | pub fn cmd_verify(args: VerifyArgs) -> Result<()> {
  function cmd_package (line 490) | pub fn cmd_package(args: PackageArgs) -> Result<()> {
  function cmd_publish (line 518) | pub fn cmd_publish(args: PublishArgs) -> Result<()> {
  function cmd_full_run (line 551) | pub fn cmd_full_run(args: FullRunArgs) -> Result<()> {
  function cmd_setup_holographic (line 620) | pub fn cmd_setup_holographic(args: SetupHolographicArgs) -> Result<()> {
  type AnalyzeArgs (line 642) | pub struct AnalyzeArgs {
  type AnalyzeFormat (line 680) | pub enum AnalyzeFormat {
    method fmt (line 686) | fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
  function cmd_analyze (line 694) | fn cmd_analyze(args: AnalyzeArgs) -> Result<()> {
  function parse_index_spec (line 788) | fn parse_index_spec(spec: &str) -> Result<Vec<usize>> {
  function run_id (line 821) | fn run_id() -> String {
  function parse_index_spec_single (line 835) | fn parse_index_spec_single() {
  function parse_index_spec_multiple (line 840) | fn parse_index_spec_multiple() {
  function parse_index_spec_range (line 845) | fn parse_index_spec_range() {
  function parse_index_spec_mixed (line 850) | fn parse_index_spec_mixed() {
  function parse_index_spec_whitespace_tolerance (line 855) | fn parse_index_spec_whitespace_tolerance() {
  function parse_index_spec_empty_rejected (line 860) | fn parse_index_spec_empty_rejected() {
  function parse_index_spec_invalid_token (line 865) | fn parse_index_spec_invalid_token() {
  function parse_index_spec_reversed_range (line 870) | fn parse_index_spec_reversed_range() {
  function parse_index_spec_trailing_comma (line 875) | fn parse_index_spec_trailing_comma() {
  function run_id_format (line 880) | fn run_id_format() {
  function run_id_unique (line 889) | fn run_id_unique() {
  function cli_parse_slice_command (line 896) | fn cli_parse_slice_command() {
  function cli_parse_run_command (line 902) | fn cli_parse_run_command() {
  function cli_log_level_default (line 915) | fn cli_log_level_default() {
  function cli_log_level_override (line 921) | fn cli_log_level_override() {
  function cli_compile_with_layers (line 934) | fn cli_compile_with_layers() {
  function cli_run_parallel (line 951) | fn cli_run_parallel() {
  function cli_slice_with_tile_size (line 970) | fn cli_slice_with_tile_size() {
  function cli_parse_combine_command (line 987) | fn cli_parse_combine_command() {
  function cli_parse_combine_with_slices_dir (line 993) | fn cli_parse_combine_with_slices_dir() {
  function cli_run_combined_default_true (line 1013) | fn cli_run_combined_default_true() {
  function cli_run_combined_explicit_false (line 1030) | fn cli_run_combined_explicit_false() {
  function cli_compile_holographic_default_false (line 1049) | fn cli_compile_holographic_default_false() {
  function cli_compile_holographic_explicit_true (line 1059) | fn cli_compile_holographic_explicit_true() {
  function cli_full_run_holographic_explicit_true (line 1076) | fn cli_full_run_holographic_explicit_true() {
  function cli_setup_holographic_command (line 1093) | fn cli_setup_holographic_command() {
  function cli_setup_holographic_overwrite (line 1111) | fn cli_setup_holographic_overwrite() {
  function cli_compile_wai_default_true (line 1128) | fn cli_compile_wai_default_true() {
  function cli_compile_wai_explicit_false (line 1138) | fn cli_compile_wai_explicit_false() {
  function resolve_circuit_ops_invalid_proof_system (line 1155) | fn resolve_circuit_ops_invalid_proof_system() {
  function resolve_circuit_ops_unsupported_op (line 1161) | fn resolve_circuit_ops_unsupported_op() {
  function resolve_circuit_ops_empty_spec_rejected (line 1167) | fn resolve_circuit_ops_empty_spec_rejected() {
  function resolve_circuit_ops_whitespace_only_spec_rejected (line 1173) | fn resolve_circuit_ops_whitespace_only_spec_rejected() {
  function resolve_circuit_ops_valid_specific_ops (line 1179) | fn resolve_circuit_ops_valid_specific_ops() {
  function resolve_circuit_ops_none_returns_all (line 1188) | fn resolve_circuit_ops_none_returns_all() {
  function resolve_slices_dir_custom_path (line 1195) | fn resolve_slices_dir_custom_path() {
  function resolve_slices_dir_default_fallback (line 1201) | fn resolve_slices_dir_default_fallback() {

FILE: crates/dsperse/src/converter.rs
  function prepare_jstprove_artifacts (line 10) | pub fn prepare_jstprove_artifacts(
  function prepare_jstprove_artifacts_filtered (line 17) | pub fn prepare_jstprove_artifacts_filtered(
  function prepare_jstprove_artifacts_nonexistent_model (line 56) | fn prepare_jstprove_artifacts_nonexistent_model() {
  function prepare_jstprove_artifacts_with_weights_as_inputs (line 62) | fn prepare_jstprove_artifacts_with_weights_as_inputs() {

FILE: crates/dsperse/src/error.rs
  type Result (line 3) | pub type Result<T> = std::result::Result<T, DsperseError>;
  type DsperseError (line 6) | pub enum DsperseError {
    method io (line 42) | pub fn io(source: std::io::Error, path: impl Into<PathBuf>) -> Self {

FILE: crates/dsperse/src/main.rs
  function main (line 6) | fn main() {

FILE: crates/dsperse/src/pipeline/channel_split.rs
  function reshape_channel_split_output (line 16) | pub(crate) fn reshape_channel_split_output(
  function execute_channel_split (line 52) | pub(crate) fn execute_channel_split(
  function execute_channel_group (line 243) | fn execute_channel_group(

FILE: crates/dsperse/src/pipeline/combined.rs
  type CombinedRun (line 15) | pub struct CombinedRun {
    method new (line 26) | pub fn new(slices_dir: &Path, input: ArrayD<f64>) -> Result<Self> {
    method all_circuit_work (line 105) | pub fn all_circuit_work(&self) -> Result<Vec<SliceWork>> {
    method mark_slice_done (line 171) | pub fn mark_slice_done(&mut self, slice_id: &str) -> bool {
    method mark_slice_failed (line 175) | pub fn mark_slice_failed(&mut self, slice_id: &str) -> bool {
    method is_slice_failed (line 183) | pub fn is_slice_failed(&self, slice_id: &str) -> bool {
    method failed_count (line 187) | pub fn failed_count(&self) -> usize {
    method is_complete (line 191) | pub fn is_complete(&self) -> bool {
    method model_meta (line 195) | pub fn model_meta(&self) -> &ModelMetadata {
    method final_output (line 199) | pub fn final_output(&self) -> Option<&ArrayD<f64>> {
    method expected_slice_outputs (line 214) | pub fn expected_slice_outputs(&self, slice_id: &str) -> Option<Vec<f64...
    method outputs_for_names (line 220) | pub fn outputs_for_names(&self, names: &[String]) -> Option<Vec<f64>> {
    method slice_tile_counts (line 229) | pub fn slice_tile_counts(&self) -> (usize, usize, HashMap<String, usiz...
    method slices_dir (line 241) | pub fn slices_dir(&self) -> &Path {
    method pending_count (line 245) | pub fn pending_count(&self) -> usize {
  function run_combined_onnx (line 250) | fn run_combined_onnx(
  function seed_tensor_cache_from_initializers (line 274) | fn seed_tensor_cache_from_initializers(

FILE: crates/dsperse/src/pipeline/compiler.rs
  type CircuitCache (line 14) | type CircuitCache = std::sync::Mutex<HashMap<String, PathBuf>>;
  type CompileOutcome (line 16) | enum CompileOutcome {
  type CompileReport (line 37) | pub struct CompileReport {
    method ok_if_no_failures (line 50) | pub fn ok_if_no_failures(self) -> Result<Self> {
  function normalize_split_metadata (line 70) | fn normalize_split_metadata(
  function compile_slices (line 127) | pub fn compile_slices(
  type SliceAnalysis (line 298) | struct SliceAnalysis {
  constant DATA_MOVEMENT_OPS (line 303) | const DATA_MOVEMENT_OPS: &[&str] = &[
  function analyze_slice_onnx (line 319) | fn analyze_slice_onnx(onnx_path: &Path, jstprove_ops: &[&str]) -> Result...
  function compute_circuit_signature (line 340) | pub(super) fn compute_circuit_signature(tmpl_path: &Path, curve: Option<...
  function compute_bundle_signature (line 437) | pub(super) fn compute_bundle_signature(
  function summarize_onnx_ops (line 501) | fn summarize_onnx_ops(onnx_path: &Path) -> String {
  type SliceAnalysisReport (line 528) | pub struct SliceAnalysisReport {
  function derive_slice_report_metrics (line 550) | fn derive_slice_report_metrics(
  function analyze_slices (line 563) | pub fn analyze_slices(
  function estimate_onnx_constraints (line 733) | fn estimate_onnx_constraints(onnx_path: &Path) -> Result<u64> {
  function extract_graph_shapes (line 743) | fn extract_graph_shapes(
  function normalize_slice_for_backend (line 794) | fn normalize_slice_for_backend(onnx_path: &Path) -> Result<Option<std::p...
  function compile_single_slice (line 806) | fn compile_single_slice(
  type HolographicSetupReport (line 979) | pub struct HolographicSetupReport {
    method ok_if_no_failures (line 986) | pub fn ok_if_no_failures(self) -> Result<Self> {
  function setup_holographic_for_slices (line 1008) | pub fn setup_holographic_for_slices(
  function run_holographic_setup (line 1098) | fn run_holographic_setup(
  function populate_channel_split_groups (line 1127) | fn populate_channel_split_groups(
  function compile_channel_split_slice (line 1187) | fn compile_channel_split_slice(
  function compile_dim_split_template (line 1380) | fn compile_dim_split_template(
  function copy_dir_recursive (line 1540) | fn copy_dir_recursive(src: &Path, dst: &Path) -> Result<()> {
  function resolve_compile_onnx (line 1555) | fn resolve_compile_onnx(
  function test_models_dir (line 1584) | fn test_models_dir() -> std::path::PathBuf {
  function make_slice_metadata (line 1588) | fn make_slice_metadata(index: usize, path: &str) -> SliceMetadata {
  constant TEST_OPS (line 1611) | const TEST_OPS: &[&str] = &["Conv", "Gemm", "MatMul"];
  function analyze_slice_onnx_nonexistent (line 1614) | fn analyze_slice_onnx_nonexistent() {
  function analyze_slice_onnx_test_model (line 1620) | fn analyze_slice_onnx_test_model() {
  function analyze_slice_onnx_with_initializers (line 1632) | fn analyze_slice_onnx_with_initializers() {
  function analyze_slice_onnx_without_initializers (line 1654) | fn analyze_slice_onnx_without_initializers() {
  function resolve_compile_onnx_no_tiling (line 1671) | fn resolve_compile_onnx_no_tiling() {
  function resolve_compile_onnx_with_tile (line 1683) | fn resolve_compile_onnx_with_tile() {
  function resolve_compile_onnx_tile_missing_falls_back (line 1723) | fn resolve_compile_onnx_tile_missing_falls_back() {
  function write_identity_onnx (line 1761) | fn write_identity_onnx(path: &Path) {
  function bundle_signature_differs_from_circuit_signature_even_without_metadata (line 1780) | fn bundle_signature_differs_from_circuit_signature_even_without_metadata...
  function bundle_signature_disambiguates_vk_presence (line 1803) | fn bundle_signature_disambiguates_vk_presence() {
  function bundle_signature_disambiguates_proof_config_and_wai_on_metadata_branch (line 1824) | fn bundle_signature_disambiguates_proof_config_and_wai_on_metadata_branc...

FILE: crates/dsperse/src/pipeline/dim_split.rs
  function execute_dim_split (line 13) | pub(crate) fn execute_dim_split(
  function execute_matmul_dim_split (line 63) | fn execute_matmul_dim_split(
  function execute_generic_dim_split (line 294) | fn execute_generic_dim_split(
  function resolve_output_shape (line 420) | fn resolve_output_shape(

FILE: crates/dsperse/src/pipeline/incremental.rs
  type SliceWork (line 14) | pub struct SliceWork {
  type SliceExecutionResult (line 27) | pub struct SliceExecutionResult {
  type IncrementalRun (line 33) | pub struct IncrementalRun {
    method new (line 44) | pub fn new(slices_dir: &Path, input: ArrayD<f64>) -> Result<Self> {
    method next_slice (line 78) | pub fn next_slice(&self) -> Result<Option<SliceWork>> {
    method apply_result (line 130) | pub fn apply_result(&mut self, result: SliceExecutionResult) -> Result...
    method is_complete (line 194) | pub fn is_complete(&self) -> bool {
    method final_output (line 198) | pub fn final_output(&self) -> Option<&ArrayD<f64>> {
    method into_run_metadata (line 213) | pub fn into_run_metadata(self) -> RunMetadata {
    method slices_dir (line 220) | pub fn slices_dir(&self) -> &Path {
    method model_meta (line 224) | pub fn model_meta(&self) -> &ModelMetadata {
    method run_meta (line 228) | pub fn run_meta(&self) -> &RunMetadata {
    method tensor_cache (line 232) | pub fn tensor_cache(&self) -> &TensorStore {

FILE: crates/dsperse/src/pipeline/packager.rs
  type PackageConfig (line 16) | pub struct PackageConfig {
  type PackageResult (line 26) | pub struct PackageResult {
  type ArtifactRef (line 34) | struct ArtifactRef {
  type Manifest (line 42) | struct Manifest {
  type ModelInfo (line 52) | struct ModelInfo {
  type InputSchema (line 70) | struct InputSchema {
  type ComponentEntry (line 77) | struct ComponentEntry {
  type WeightRef (line 90) | struct WeightRef {
  type DagNode (line 98) | struct DagNode {
  constant VALID_CURVES (line 106) | const VALID_CURVES: &[&str] = &[
  function normalize_curve (line 115) | fn normalize_curve(curve: Option<&str>) -> Result<Option<String>> {
  function package_content_addressed (line 130) | pub fn package_content_addressed(
  function resolve_circuit_dir (line 306) | fn resolve_circuit_dir(slices_dir: &Path, slice: &SliceMetadata) -> Resu...
  type ComponentSource (line 334) | enum ComponentSource {
  function resolve_source_onnx (line 339) | fn resolve_source_onnx(slices_dir: &Path, slice: &SliceMetadata) -> Resu...
  function list_bundle_files (line 387) | fn list_bundle_files(dir: &Path) -> Result<Vec<String>> {
  function extract_component (line 413) | fn extract_component(
  function collect_payload_blobs (line 454) | fn collect_payload_blobs(
  function reject_symlink_path (line 504) | fn reject_symlink_path(path: &Path) -> Result<()> {
  function reject_symlink (line 517) | fn reject_symlink(entry: &walkdir::DirEntry) -> Result<()> {
  function hash_named_file (line 527) | fn hash_named_file(path: &Path, filename: &str, curve: Option<&str>) -> ...
  function sha256_bytes (line 554) | fn sha256_bytes(data: &[u8]) -> String {
  function encode_hex (line 560) | fn encode_hex(bytes: &[u8]) -> String {
  function copy_files_flat (line 569) | fn copy_files_flat(source_dir: &Path, dest_dir: &Path) -> Result<u64> {
  function write_minimal_onnx (line 605) | fn write_minimal_onnx(path: &Path, input_dim: i64) {
  function create_test_model_metadata (line 623) | fn create_test_model_metadata(slices_dir: &Path, count: usize) {
  function ensure_test_artifacts (line 697) | fn ensure_test_artifacts(slices_dir: &Path) {
  function test_content_addressed_output_structure (line 705) | fn test_content_addressed_output_structure() {
  function test_missing_model_onnx_fails (line 746) | fn test_missing_model_onnx_fails() {
  function test_symlinked_artifact_rejected (line 769) | fn test_symlinked_artifact_rejected() {
  function test_manifest_structure (line 794) | fn test_manifest_structure() {
  function test_component_files_exist (line 840) | fn test_component_files_exist() {
  function test_wb_files_exist (line 871) | fn test_wb_files_exist() {
  function test_hash_determinism (line 903) | fn test_hash_determinism() {
  function test_curve_changes_hash (line 945) | fn test_curve_changes_hash() {
  function test_curve_changes_hash_uncompiled_onnx (line 1005) | fn test_curve_changes_hash_uncompiled_onnx() {
  function test_invalid_curve_rejected (line 1108) | fn test_invalid_curve_rejected() {
  function test_curve_normalization (line 1140) | fn test_curve_normalization() {
  function test_deduplication_shared_circuits (line 1194) | fn test_deduplication_shared_circuits() {
  function test_uncompiled_onnx_only_slice (line 1281) | fn test_uncompiled_onnx_only_slice() {
  function test_missing_artifact_errors (line 1361) | fn test_missing_artifact_errors() {
  function test_path_traversal_rejected (line 1430) | fn test_path_traversal_rejected() {
  function test_nonexistent_dir (line 1501) | fn test_nonexistent_dir() {
  function test_identical_bytes_different_filenames_distinct_hashes (line 1516) | fn test_identical_bytes_different_filenames_distinct_hashes() {
  function test_symlink_payload_rejected (line 1610) | fn test_symlink_payload_rejected() {
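The packager tests above pin two properties of `hash_named_file`: hashing is deterministic (`test_hash_determinism`), and the filename and curve are mixed into the digest, so identical bytes under different names get distinct addresses (`test_identical_bytes_different_filenames_distinct_hashes`, `test_curve_changes_hash`). A stdlib-only sketch of that keying scheme — `DefaultHasher` stands in for the SHA-256 used by `sha256_bytes`, purely to keep the example dependency-free:

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// `encode_hex` analogue: lowercase hex of raw bytes.
fn encode_hex(bytes: &[u8]) -> String {
    bytes.iter().map(|b| format!("{b:02x}")).collect()
}

// `hash_named_file` analogue: digest over (filename, curve, content) so that
// identical payload bytes stored under different names or curves get
// distinct content addresses. DefaultHasher is an illustrative stand-in.
fn hash_named(content: &[u8], filename: &str, curve: Option<&str>) -> String {
    let mut h = DefaultHasher::new();
    filename.hash(&mut h);
    curve.hash(&mut h);
    content.hash(&mut h);
    encode_hex(&h.finish().to_be_bytes())
}

fn main() {
    let bytes = b"identical payload";
    let a = hash_named(bytes, "circuit.bin", None);
    let b = hash_named(bytes, "vk.bin", None);
    let c = hash_named(bytes, "circuit.bin", Some("bn254"));
    assert_ne!(a, b); // filename changes the address
    assert_ne!(a, c); // curve changes the address
    assert_eq!(a, hash_named(bytes, "circuit.bin", None)); // deterministic
    println!("{a}");
}
```

Keying the digest on the filename is what keeps deduplication (`test_deduplication_shared_circuits`) from conflating artifacts that happen to share bytes but play different roles in the bundle.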

FILE: crates/dsperse/src/pipeline/prover.rs
  function prove_run (line 9) | pub fn prove_run(

FILE: crates/dsperse/src/pipeline/publisher.rs
  constant REQUEST_TIMEOUT (line 9) | const REQUEST_TIMEOUT: Duration = Duration::from_secs(30);
  constant UPLOAD_TIMEOUT (line 10) | const UPLOAD_TIMEOUT: Duration = Duration::from_secs(300);
  type PublishConfig (line 12) | pub struct PublishConfig {
  type PublishResult (line 24) | pub struct PublishResult {
  function auth_header (line 32) | fn auth_header(token: &str) -> String {
  function publish (line 36) | pub fn publish(dir: &Path, config: &PublishConfig) -> Result<PublishResu...
  function publish_async (line 45) | async fn publish_async(dir: &Path, config: &PublishConfig) -> Result<Pub...

FILE: crates/dsperse/src/pipeline/runner.rs
  type RunConfig (line 26) | pub struct RunConfig {
    method default (line 34) | fn default() -> Self {
  function resolve_circuit_path_required (line 44) | fn resolve_circuit_path_required(
  function resolve_circuit_path_optional (line 55) | pub(crate) fn resolve_circuit_path_optional(
  function load_model_metadata (line 64) | pub fn load_model_metadata(slices_dir: &Path) -> Result<ModelMetadata> {
  function validate_weights_onnx (line 86) | fn validate_weights_onnx(
  function load_donor_model (line 118) | fn load_donor_model(
  function donor_init_map (line 134) | fn donor_init_map(
  function run_inference (line 150) | pub fn run_inference(
  function run_combined_inference (line 397) | fn run_combined_inference(
  function execute_slice (line 767) | fn execute_slice(
  function execute_single (line 872) | fn execute_single(
  function store_named_outputs (line 986) | fn store_named_outputs(
  function collect_named_outputs (line 997) | fn collect_named_outputs(
  function run_onnx_inference (line 1019) | pub(crate) fn run_onnx_inference(onnx_path: &Path, input: &ArrayD<f64>) ...
  function run_onnx_inference_named (line 1029) | pub(crate) fn run_onnx_inference_named(
  function run_onnx_inference_multi_named (line 1038) | pub(crate) fn run_onnx_inference_multi_named(
  function build_execution_chain (line 1057) | pub(crate) fn build_execution_chain(
  function build_run_metadata (line 1126) | pub(crate) fn build_run_metadata(
  function extract_initializers_from_map (line 1171) | pub(crate) fn extract_initializers_from_map(
  function extract_onnx_initializers (line 1213) | pub fn extract_onnx_initializers(
  function flatten_cached_inputs (line 1226) | pub(crate) fn flatten_cached_inputs(cache: &TensorStore, names: &[String...
  function generate_wai_witness (line 1236) | pub(crate) fn generate_wai_witness(
  function make_tiling (line 1268) | fn make_tiling(
  function reshape_to_4d_valid (line 1302) | fn reshape_to_4d_valid() {
  function reshape_to_4d_single_element (line 1309) | fn reshape_to_4d_single_element() {
  function reshape_to_4d_mismatch (line 1317) | fn reshape_to_4d_mismatch() {
  function reshape_to_4d_empty (line 1323) | fn reshape_to_4d_empty() {
  function split_into_tiles_2x2_no_halo (line 1329) | fn split_into_tiles_2x2_no_halo() {
  function split_into_tiles_with_halo (line 1341) | fn split_into_tiles_with_halo() {
  function split_into_tiles_negative_halo_rejected (line 1353) | fn split_into_tiles_negative_halo_rejected() {
  function split_into_tiles_batch_gt1_rejected (line 1360) | fn split_into_tiles_batch_gt1_rejected() {
  function reconstruct_from_tiles_2x2 (line 1367) | fn reconstruct_from_tiles_2x2() {
  function reconstruct_from_tiles_empty (line 1388) | fn reconstruct_from_tiles_empty() {
  function reconstruct_from_tiles_wrong_element_count (line 1394) | fn reconstruct_from_tiles_wrong_element_count() {
  function reconstruct_from_tiles_wrong_tile_count (line 1401) | fn reconstruct_from_tiles_wrong_tile_count() {
  function split_reconstruct_roundtrip (line 1420) | fn split_reconstruct_roundtrip() {
  function store_named_outputs_basic (line 1442) | fn store_named_outputs_basic() {
  function store_named_outputs_missing_name_errors (line 1455) | fn store_named_outputs_missing_name_errors() {
  function store_named_outputs_partial_write_errors (line 1464) | fn store_named_outputs_partial_write_errors() {
  function run_config_default (line 1480) | fn run_config_default() {
  function multi_input_activation_concatenation_ordering (line 1489) | fn multi_input_activation_concatenation_ordering() {
  function multi_input_activation_missing_tensor_error (line 1520) | fn multi_input_activation_missing_tensor_error() {

FILE: crates/dsperse/src/pipeline/slice_cache.rs
  type SliceAssets (line 6) | pub struct SliceAssets {
    method load_from_dslice (line 12) | pub fn load_from_dslice(slices_dir: &Path, slice_id: &str) -> Result<S...

FILE: crates/dsperse/src/pipeline/stage.rs
  type PipelineStage (line 15) | pub enum PipelineStage {
    method execution_method (line 21) | fn execution_method(&self) -> ExecutionMethod {
    method action_label (line 28) | fn action_label(&self) -> &'static str {
    method past_label (line 35) | fn past_label(&self) -> &'static str {
    method error_label (line 42) | fn error_label(&self) -> &'static str {
  function run_pipeline_stage (line 50) | pub fn run_pipeline_stage(
  function execute_single_slice (line 160) | fn execute_single_slice(
  function execute_stage_operation (line 220) | fn execute_stage_operation(
  function execute_tiled_stage (line 274) | fn execute_tiled_stage(
  function execute_tile_stage_operation (line 356) | fn execute_tile_stage_operation(

FILE: crates/dsperse/src/pipeline/strategy.rs
  type ExecutionStrategy (line 6) | pub enum ExecutionStrategy<'a> {
    method from_metadata (line 14) | pub fn from_metadata(meta: &'a RunSliceMetadata, use_circuit: bool) -> R...
    method execution_method (line 52) | pub fn execution_method(&self) -> ExecutionMethod {
    method output_name (line 62) | pub fn output_name(&self) -> Option<&str> {

FILE: crates/dsperse/src/pipeline/tensor_store.rs
  type TensorStore (line 8) | pub struct TensorStore {
    method new (line 13) | pub fn new() -> Self {
    method get (line 17) | pub fn get(&self, name: &str) -> Result<&ArrayD<f64>> {
    method try_get (line 23) | pub fn try_get(&self, name: &str) -> Option<&ArrayD<f64>> {
    method put (line 27) | pub fn put(&mut self, name: String, tensor: ArrayD<f64>) {
    method contains (line 31) | pub fn contains(&self, name: &str) -> bool {
    method len (line 35) | pub fn len(&self) -> usize {
    method is_empty (line 39) | pub fn is_empty(&self) -> bool {
    method keys (line 43) | pub fn keys(&self) -> impl Iterator<Item = &String> {
    method as_map (line 47) | pub fn as_map(&self) -> &HashMap<String, ArrayD<f64>> {
    method gather (line 51) | pub fn gather(&self, names: &[String]) -> Result<ArrayD<f64>> {
  function put_and_get (line 62) | fn put_and_get() {
  function get_missing_returns_error (line 70) | fn get_missing_returns_error() {
  function try_get_missing_returns_none (line 76) | fn try_get_missing_returns_none() {
  function contains_check (line 82) | fn contains_check() {
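The `TensorStore` entries above describe a thin named-tensor map with fallible (`get`) and infallible (`try_get`) lookup plus an ordered `gather`. A minimal stdlib-only sketch of that surface — `Vec<f64>` replaces ndarray's `ArrayD<f64>` and `String` replaces the crate's error type, both assumptions made to keep the example self-contained:

```rust
use std::collections::HashMap;

// Minimal stand-in for pipeline::tensor_store::TensorStore.
pub struct TensorStore {
    tensors: HashMap<String, Vec<f64>>,
}

impl TensorStore {
    pub fn new() -> Self {
        TensorStore { tensors: HashMap::new() }
    }

    // Fallible lookup, mirroring `get(&self, name) -> Result<&ArrayD<f64>>`.
    pub fn get(&self, name: &str) -> Result<&Vec<f64>, String> {
        self.tensors
            .get(name)
            .ok_or_else(|| format!("tensor '{name}' not found"))
    }

    pub fn try_get(&self, name: &str) -> Option<&Vec<f64>> {
        self.tensors.get(name)
    }

    pub fn put(&mut self, name: String, tensor: Vec<f64>) {
        self.tensors.insert(name, tensor);
    }

    pub fn contains(&self, name: &str) -> bool {
        self.tensors.contains_key(name)
    }

    // `gather` analogue: concatenate the named tensors in order,
    // failing on the first missing name.
    pub fn gather(&self, names: &[String]) -> Result<Vec<f64>, String> {
        let mut out = Vec::new();
        for name in names {
            out.extend_from_slice(self.get(name)?);
        }
        Ok(out)
    }
}

fn main() {
    let mut store = TensorStore::new();
    store.put("a".into(), vec![1.0, 2.0]);
    store.put("b".into(), vec![3.0]);
    assert!(store.contains("a"));
    assert!(store.try_get("missing").is_none());
    let gathered = store.gather(&["a".into(), "b".into()]).unwrap();
    assert_eq!(gathered, vec![1.0, 2.0, 3.0]);
}
```

The `get`/`try_get` split matches the unit tests listed above: `get_missing_returns_error` versus `try_get_missing_returns_none`.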

FILE: crates/dsperse/src/pipeline/tile_executor.rs
  function resolve_tile_circuit (line 9) | pub fn resolve_tile_circuit(
  function execute_tiles (line 34) | pub fn execute_tiles<T, F>(parallel: usize, num_tiles: usize, op: F) -> ...
  function make_tiling (line 58) | fn make_tiling() -> TilingInfo {
  function resolve_tile_circuit_no_info (line 85) | fn resolve_tile_circuit_no_info() {
  function resolve_tile_circuit_with_default (line 92) | fn resolve_tile_circuit_with_default() {
  function resolve_tile_circuit_from_single_tile (line 100) | fn resolve_tile_circuit_from_single_tile() {
  function execute_tiles_collects_results (line 113) | fn execute_tiles_collects_results() {
  function execute_tiles_zero_tiles_errors (line 122) | fn execute_tiles_zero_tiles_errors() {
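`execute_tiles` maps an operation over tile indices and collects the results, and its tests pin down that zero tiles is an error. A sketch of that shape — the real function's parallelism strategy is not shown (here `parallel` is only validated and execution is sequential, an assumption for clarity):

```rust
// Minimal sketch shaped like pipeline::tile_executor::execute_tiles.
fn execute_tiles<T, F>(parallel: usize, num_tiles: usize, op: F) -> Result<Vec<T>, String>
where
    F: Fn(usize) -> Result<T, String>,
{
    if parallel == 0 {
        return Err("parallel must be nonzero".into());
    }
    if num_tiles == 0 {
        // Mirrors execute_tiles_zero_tiles_errors in the listed tests.
        return Err("no tiles to execute".into());
    }
    // Apply `op` to each tile index in order; collecting into a Result
    // propagates the first per-tile failure.
    (0..num_tiles).map(|idx| op(idx)).collect()
}

fn main() {
    let doubled = execute_tiles(2, 4, |idx| Ok::<_, String>(idx * 2)).unwrap();
    assert_eq!(doubled, vec![0, 2, 4, 6]);
    assert!(execute_tiles(1, 0, |idx| Ok::<_, String>(idx)).is_err());
}
```

Collecting an iterator of `Result<T, E>` into `Result<Vec<T>, E>` is the idiomatic way to get results-in-order with early exit on the first failed tile.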

FILE: crates/dsperse/src/pipeline/tiled.rs
  function execute_tiled (line 22) | pub(crate) fn execute_tiled(
  function execute_combined_tiled (line 374) | pub(crate) fn execute_combined_tiled(
  function prepare_tiles_from_cache (line 566) | pub(crate) fn prepare_tiles_from_cache(
  function split_for_tiling (line 607) | pub fn split_for_tiling(input: &ArrayD<f64>, tiling: &TilingInfo) -> Res...
  function split_into_tiles (line 672) | pub fn split_into_tiles(input: &Array4<f64>, tiling: &TilingInfo) -> Res...
  function reconstruct_from_tiles (line 724) | pub fn reconstruct_from_tiles(
  function trim_to_original_dims (line 792) | pub(crate) fn trim_to_original_dims(arr: ArrayD<f64>, tiling: &TilingInf...
  function split_into_tiles_1d (line 818) | pub(crate) fn split_into_tiles_1d(
  function reconstruct_from_tiles_1d (line 866) | pub(crate) fn reconstruct_from_tiles_1d(
  function trim_to_original_seq (line 909) | pub(crate) fn trim_to_original_seq(arr: ArrayD<f64>, tiling: &TilingInfo...
  function prepare_fixed_segments_from_cache (line 927) | pub(crate) fn prepare_fixed_segments_from_cache(
  function reconstruct_from_fixed_segments (line 971) | pub(crate) fn reconstruct_from_fixed_segments(
  function reshape_to_4d (line 997) | pub(crate) fn reshape_to_4d(flat: &[f64], c: usize, h: usize, w: usize) ...
  function flatten_tile_inputs (line 1009) | pub(crate) fn flatten_tile_inputs(all_tiles: &[Vec<ArrayD<f64>>], tile_i...
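The 1D helpers above (`split_into_tiles_1d`, `reconstruct_from_tiles_1d`, `trim_to_original_seq`) suggest a pad / split / reconstruct / trim cycle over a flat sequence. A stdlib-only sketch of that roundtrip — fixed segment size with zero-padding of the trailing partial segment is an assumed policy for illustration, not necessarily the crate's:

```rust
// Sketch of a 1D fixed-segment split and its inverse.
fn split_1d(input: &[f64], seg: usize) -> Vec<Vec<f64>> {
    input
        .chunks(seg)
        .map(|c| {
            let mut tile = c.to_vec();
            tile.resize(seg, 0.0); // zero-pad the trailing partial segment
            tile
        })
        .collect()
}

// Inverse: concatenate segments, then drop the padding
// (the trim_to_original_seq analogue).
fn reconstruct_1d(tiles: &[Vec<f64>], original_len: usize) -> Vec<f64> {
    let mut out: Vec<f64> = tiles.iter().flatten().copied().collect();
    out.truncate(original_len);
    out
}

fn main() {
    let data: Vec<f64> = (0..10).map(|i| i as f64).collect();
    let tiles = split_1d(&data, 4);
    assert_eq!(tiles.len(), 3); // 4 + 4 + (2 padded to 4)
    let round = reconstruct_1d(&tiles, data.len());
    assert_eq!(round, data); // lossless roundtrip after trimming
}
```

The same invariant is what `split_reconstruct_roundtrip` in `runner.rs` exercises for the 4D spatial case, where halos make the trim step essential.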

FILE: crates/dsperse/src/pipeline/verifier.rs
  function verify_run (line 9) | pub fn verify_run(

FILE: crates/dsperse/src/python.rs
  function to_py_err (line 12) | fn to_py_err(e: DsperseError) -> PyErr {
  function to_pretty_json (line 30) | fn to_pretty_json<T: serde::Serialize>(value: &T) -> PyResult<String> {
  function resolve_ops (line 38) | fn resolve_ops(proof_system: &str, circuit_ops: Option<&[String]>) -> Py...
  function require_nonzero (line 58) | fn require_nonzero(parallel: usize) -> PyResult<()> {
  function slice_model (line 69) | fn slice_model(
  function compile_slices (line 99) | fn compile_slices(
  function run_inference (line 154) | fn run_inference(
  function prove_run (line 183) | fn prove_run(py: Python<'_>, run_dir: &str, slices_dir: &str, parallel: ...
  function verify_run (line 196) | fn verify_run(
  function cli_main (line 214) | fn cli_main(py: Python<'_>, argv: Option<Vec<String>>) -> PyResult<()> {
  function setup_holographic (line 243) | fn setup_holographic(
  function _native (line 267) | fn _native(m: &Bound<'_, PyModule>) -> PyResult<()> {

FILE: crates/dsperse/src/schema/execution.rs
  type ExecutionMethod (line 9) | pub enum ExecutionMethod {
    method fmt (line 20) | fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
  type TileResult (line 34) | pub struct TileResult {
    method failure (line 48) | pub fn failure(
    method success (line 64) | pub fn success(tile_idx: usize, method: Option<ExecutionMethod>, time_...
  type ExecutionInfo (line 77) | pub struct ExecutionInfo {
  type StrategyOutput (line 90) | pub struct StrategyOutput {
  type SliceResult (line 96) | pub struct SliceResult {
    method failure (line 112) | pub fn failure(
    method success (line 129) | pub fn success(slice_id: impl Into<String>, method: ExecutionMethod, t...
  type ExecutionNode (line 143) | pub struct ExecutionNode {
    method default (line 162) | fn default() -> Self {
  type ExecutionResultEntry (line 177) | pub struct ExecutionResultEntry {
  type ExecutionChain (line 188) | pub struct ExecutionChain {
    method get_result_for_slice (line 204) | pub fn get_result_for_slice(&self, slice_id: &str) -> Option<&Executio...
  type RunMetadata (line 212) | pub struct RunMetadata {
    method get_slice (line 228) | pub fn get_slice(&self, slice_id: &str) -> Option<&RunSliceMetadata> {
    method iter_circuit_slices (line 232) | pub fn iter_circuit_slices(&self) -> impl Iterator<Item = (&str, &RunS...
  function is_zero (line 245) | fn is_zero(v: &f64) -> bool {

FILE: crates/dsperse/src/schema/metadata.rs
  type BackendKind (line 9) | pub enum BackendKind {
    method fmt (line 17) | fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
  type TensorShape (line 26) | pub struct TensorShape {
  type Dependencies (line 34) | pub struct Dependencies {
  type CompilationFiles (line 44) | pub struct CompilationFiles {
  type BackendCompilation (line 56) | pub struct BackendCompilation {
  type Compilation (line 71) | pub struct Compilation {
  type SliceShapeWrapper (line 77) | pub struct SliceShapeWrapper {
  type SliceMetadata (line 83) | pub struct SliceMetadata {
    method split_strategy (line 111) | pub fn split_strategy(&self) -> Option<super::tiling::SplitStrategy<'_...
    method output_names (line 120) | pub fn output_names(&self) -> &[String] {
    method resolve_onnx (line 124) | pub fn resolve_onnx(
  type RunSliceMetadata (line 137) | pub struct RunSliceMetadata {
    method split_strategy (line 169) | pub fn split_strategy(&self) -> Option<super::tiling::SplitStrategy<'_...
  type ModelMetadata (line 180) | pub struct ModelMetadata {
    method load (line 214) | pub fn load(path: &std::path::Path) -> crate::error::Result<Self> {
    method save (line 219) | pub fn save(&self, path: &std::path::Path) -> crate::error::Result<()> {
    method stamp_version (line 231) | pub fn stamp_version(&mut self) {

FILE: crates/dsperse/src/schema/tiling.rs
  type SplitStrategy (line 4) | pub enum SplitStrategy<'a> {
  type TileInfo (line 11) | pub struct TileInfo {
  type TilingInfo (line 21) | pub struct TilingInfo {
    method all_input_names (line 67) | pub fn all_input_names(&self) -> Vec<&str> {
  type ChannelGroupInfo (line 77) | pub struct ChannelGroupInfo {
  type ChannelSplitInfo (line 93) | pub struct ChannelSplitInfo {
  function default_one (line 122) | fn default_one() -> usize {
  function default_four (line 126) | fn default_four() -> usize {
  function default_pair_zero (line 130) | fn default_pair_zero() -> [i64; 2] {
  function default_pair_one (line 134) | fn default_pair_one() -> [i64; 2] {
  function default_quad_zero (line 138) | fn default_quad_zero() -> [i64; 4] {
  function deserialize_halo (line 142) | fn deserialize_halo<'de, D>(deserializer: D) -> std::result::Result<[i64...
  type DimSplitKind (line 159) | pub enum DimSplitKind {
  type DimSplitInfo (line 167) | pub struct DimSplitInfo {
    method from_detection (line 203) | pub fn from_detection(
  function default_input_name (line 236) | fn default_input_name() -> String {
  function default_output_name (line 240) | fn default_output_name() -> String {

FILE: crates/dsperse/src/slicer/analyzer.rs
  type NodeAnalysis (line 11) | pub struct NodeAnalysis {
  type ParameterDetail (line 20) | pub struct ParameterDetail {
  type NodeDependencies (line 26) | pub struct NodeDependencies {
  type AnalysisResult (line 32) | pub struct AnalysisResult {
  function analyze (line 45) | pub fn analyze(model: &ModelProto, onnx_path: Option<&Path>) -> Result<A...
  function get_model_input_shapes (line 129) | fn get_model_input_shapes(
  function get_model_output_shapes (line 141) | fn get_model_output_shapes(graph: &GraphProto) -> Vec<Vec<i64>> {
  function get_model_output_names (line 145) | fn get_model_output_names(graph: &GraphProto) -> Vec<String> {
  function get_parameter_details (line 149) | fn get_parameter_details(
  function get_segment_dependencies (line 174) | pub fn get_segment_dependencies(
  function make_node (line 251) | fn make_node(
  function make_model_with_nodes (line 271) | fn make_model_with_nodes(nodes: Vec<onnx_proto::NodeProto>) -> ModelProto {
  function make_model_with_initializers (line 278) | fn make_model_with_initializers(
  function analyze_empty_model (line 289) | fn analyze_empty_model() {
  function analyze_single_relu (line 298) | fn analyze_single_relu() {
  function analyze_conv_with_initializer (line 308) | fn analyze_conv_with_initializer() {
  function analyze_non_param_op_has_no_details (line 328) | fn analyze_non_param_op_has_no_details() {
  function analyze_model_no_graph (line 340) | fn analyze_model_no_graph() {
  function analyze_dependencies_tracked (line 349) | fn analyze_dependencies_tracked() {
  function analyze_unnamed_nodes_get_generated_keys (line 366) | fn analyze_unnamed_nodes_get_generated_keys() {
  function get_segment_dependencies_basic (line 375) | fn get_segment_dependencies_basic() {
  function make_attribute_graph (line 420) | fn make_attribute_graph(
  function analyze_loop_captures_outer_scope_refs (line 433) | fn analyze_loop_captures_outer_scope_refs() {
  function analyze_if_captures_outer_scope_refs (line 515) | fn analyze_if_captures_outer_scope_refs() {
  function segment_deps_include_subgraph_outer_refs (line 611) | fn segment_deps_include_subgraph_outer_refs() {
  function analyze_nested_subgraph_captures_outer_scope_refs (line 687) | fn analyze_nested_subgraph_captures_outer_scope_refs() {

FILE: crates/dsperse/src/slicer/autotiler.rs
  function try_pair (line 8) | fn try_pair(v: &[i64]) -> Option<[i64; 2]> {
  function try_quad (line 16) | fn try_quad(v: &[i64]) -> Option<[i64; 4]> {
  function model_opset (line 24) | pub(crate) fn model_opset(model: &ModelProto) -> i64 {
  function is_elementwise (line 34) | fn is_elementwise(op: &str) -> bool {
  type ChannelSplitParams (line 39) | pub struct ChannelSplitParams {
  type PoolParams (line 49) | struct PoolParams {
    method from_node (line 58) | fn from_node(node: &NodeProto, node_idx: usize) -> Option<PoolParams> {
  function get_pool_params (line 103) | fn get_pool_params(graph: &GraphProto) -> Option<PoolParams> {
  type ConvParams (line 112) | struct ConvParams {
    method from_node (line 124) | fn from_node(node: &NodeProto, node_idx: usize, graph: &GraphProto) ->...
  function get_conv_params (line 200) | fn get_conv_params(graph: &GraphProto) -> Option<ConvParams> {
  function effective_kernel (line 209) | fn effective_kernel(kernel: [i64; 2], dilation: [i64; 2]) -> Option<[i64...
  function conv_output_hw (line 221) | fn conv_output_hw(
  function compute_halo_size (line 249) | fn compute_halo_size(pads: [i64; 4]) -> Option<[i64; 4]> {
  function compute_min_spatial_tile (line 256) | fn compute_min_spatial_tile(kernel: [i64; 2], dilation: [i64; 2]) -> Opt...
  type SpatialKernelParams (line 261) | struct SpatialKernelParams {
  function extract_spatial_kernel_params (line 268) | fn extract_spatial_kernel_params(
  function is_spatial_tileable (line 307) | fn is_spatial_tileable(graph: &GraphProto, primary_op: &str) -> bool {
  function is_standard_conv_slice (line 319) | fn is_standard_conv_slice(graph: &GraphProto) -> Option<ConvParams> {
  function is_tileable (line 324) | fn is_tileable(graph: &GraphProto) -> bool {
  function is_channel_splittable (line 328) | fn is_channel_splittable(graph: &GraphProto) -> bool {
  function get_model_dimensions (line 335) | fn get_model_dimensions(graph: &GraphProto) -> Option<(String, String, i...
  function is_elementwise_only_slice (line 351) | fn is_elementwise_only_slice(graph: &GraphProto) -> bool {
  function find_weights_and_bias (line 358) | fn find_weights_and_bias(
  type WeightInfo (line 380) | struct WeightInfo {
  type SlicePrologue (line 385) | struct SlicePrologue<'a> {
  function extract_slice_prologue (line 392) | fn extract_slice_prologue(model: &ModelProto) -> Option<SlicePrologue<'_...
  function find_optimal_tile_size (line 423) | fn find_optimal_tile_size(
  function calculate_spatial_tile_config (line 439) | fn calculate_spatial_tile_config(
  function calculate_channel_split_config (line 462) | fn calculate_channel_split_config(
  constant CONV_TILE_BUDGET (line 489) | pub const CONV_TILE_BUDGET: i64 = 512;
  constant POOL_TILE_BUDGET (line 490) | pub const POOL_TILE_BUDGET: i64 = 1024;
  function detect_tiling_needs (line 492) | pub fn detect_tiling_needs(
  constant ELEMENTWISE_SEGMENT_SIZE (line 605) | pub const ELEMENTWISE_SEGMENT_SIZE: i64 = 1024;
  function elementwise_segment_size (line 607) | fn elementwise_segment_size() -> i64 {
  function detect_elementwise_fixed_segments (line 615) | fn detect_elementwise_fixed_segments(graph: &GraphProto) -> Option<Tilin...
  constant MAX_ESTIMATED_CONSTRAINTS (line 671) | pub const MAX_ESTIMATED_CONSTRAINTS: u64 = 750_000;
  function smallest_divisor_at_least (line 679) | fn smallest_divisor_at_least(dim: usize, target: usize) -> Option<usize> {
  type DimSplitDetection (line 688) | pub struct DimSplitDetection {
  function estimate_slice_constraints (line 704) | pub fn estimate_slice_constraints(nodes: &[NodeProto], shapes: &HashMap<...
  function detect_dim_split (line 730) | pub fn detect_dim_split(
  type TilingDetection (line 1126) | pub enum TilingDetection {
  type SpatialTileGeometry (line 1164) | struct SpatialTileGeometry {
  function compute_spatial_tile_geometry (line 1173) | fn compute_spatial_tile_geometry(
  type TileModelSpec (line 1230) | struct TileModelSpec {
  function save_tile_model (line 1238) | fn save_tile_model(
  function create_tile_slice (line 1263) | pub fn create_tile_slice(
  function create_pool_tile_slice (line 1371) | pub fn create_pool_tile_slice(
  function integrate_extra_ops (line 1445) | fn integrate_extra_ops(
  function create_channel_group_slice (line 1539) | fn create_channel_group_slice(
  function i64_to_usize (line 1638) | fn i64_to_usize(val: i64, ctx: &str, name: &str) -> Result<usize> {
  function checked_dim_product (line 1644) | fn checked_dim_product(factors: &[usize]) -> Result<usize> {
  function slice_weights (line 1654) | fn slice_weights(weights: &WeightInfo, c_start: usize, c_end: usize) -> ...
  function save_conv_bias (line 1714) | fn save_conv_bias(
  function apply_channel_splitting (line 1738) | pub fn apply_channel_splitting(
  function create_dim_split_template (line 1889) | pub fn create_dim_split_template(
  function create_matmul_dim_template (line 1910) | fn create_matmul_dim_template(
  function check_axis_separable (line 2049) | fn check_axis_separable(
  function create_generic_dim_template (line 2118) | fn create_generic_dim_template(
  function create_elementwise_tile_slice (line 2362) | pub fn create_elementwise_tile_slice(
  type TileSliceResult (line 2503) | pub struct TileSliceResult {
  function halo_symmetric_pads (line 2513) | fn halo_symmetric_pads() {
  function halo_asymmetric_pads (line 2518) | fn halo_asymmetric_pads() {
  function halo_zero_pads (line 2523) | fn halo_zero_pads() {
  function halo_negative_pads_rejected (line 2528) | fn halo_negative_pads_rejected() {
  function halo_mixed_pads (line 2533) | fn halo_mixed_pads() {
  function min_tile_3x3_no_dilation (line 2538) | fn min_tile_3x3_no_dilation() {
  function min_tile_5x5_no_dilation (line 2543) | fn min_tile_5x5_no_dilation() {
  function min_tile_3x3_dilation_2 (line 2548) | fn min_tile_3x3_dilation_2() {
  function min_tile_1x1 (line 2554) | fn min_tile_1x1() {
  function optimal_tile_exact_divisor (line 2559) | fn optimal_tile_exact_divisor() {
  function optimal_tile_no_exact_divisor_falls_back (line 2564) | fn optimal_tile_no_exact_divisor_falls_back() {
  function optimal_tile_target_equals_spatial (line 2569) | fn optimal_tile_target_equals_spatial() {
  function optimal_tile_min_exceeds_target (line 2574) | fn optimal_tile_min_exceeds_target() {
  function optimal_tile_stride_constraint (line 2579) | fn optimal_tile_stride_constraint() {
  function optimal_tile_no_valid_stride_divisor (line 2585) | fn optimal_tile_no_valid_stride_divisor() {
  function checked_dim_product_normal (line 2590) | fn checked_dim_product_normal() {
  function checked_dim_product_empty (line 2595) | fn checked_dim_product_empty() {
  function checked_dim_product_overflow (line 2600) | fn checked_dim_product_overflow() {
  function checked_dim_product_single (line 2605) | fn checked_dim_product_single() {
  function slice_weights_basic (line 2610) | fn slice_weights_basic() {
  function slice_weights_single_channel (line 2625) | fn slice_weights_single_channel() {
  function slice_weights_start_ge_end (line 2636) | fn slice_weights_start_ge_end() {
  function slice_weights_end_exceeds_c_in (line 2645) | fn slice_weights_end_exceeds_c_in() {
  function slice_weights_insufficient_dims (line 2654) | fn slice_weights_insufficient_dims() {
  function slice_weights_data_length_mismatch (line 2663) | fn slice_weights_data_length_mismatch() {
  function elementwise_ops_recognized (line 2672) | fn elementwise_ops_recognized() {
  function non_elementwise_ops_rejected (line 2680) | fn non_elementwise_ops_rejected() {
  function spatial_tile_config_already_fits (line 2688) | fn spatial_tile_config_already_fits() {
  function spatial_tile_config_min_tile_too_large (line 2695) | fn spatial_tile_config_min_tile_too_large() {
  function spatial_tile_config_finds_tile (line 2702) | fn spatial_tile_config_finds_tile() {
  function channel_split_config_basic (line 2712) | fn channel_split_config_basic() {
  function channel_split_config_zero_dims (line 2722) | fn channel_split_config_zero_dims() {
  function channel_split_config_fits_without_splitting (line 2728) | fn channel_split_config_fits_without_splitting() {
  function detect_tiling_none_without_tile_size (line 2733) | fn detect_tiling_none_without_tile_size() {
  function detect_tiling_none_empty_graph (line 2742) | fn detect_tiling_none_empty_graph() {
  function effective_kernel_overflow (line 2751) | fn effective_kernel_overflow() {
  function effective_kernel_sub_underflow (line 2757) | fn effective_kernel_sub_underflow() {
  function effective_kernel_valid (line 2762) | fn effective_kernel_valid() {
  function conv_output_hw_zero_stride (line 2769) | fn conv_output_hw_zero_stride() {
  function conv_output_hw_kernel_exceeds_input (line 2781) | fn conv_output_hw_kernel_exceeds_input() {
  function conv_output_hw_overflow_pads (line 2789) | fn conv_output_hw_overflow_pads() {
  function conv_output_hw_valid (line 2797) | fn conv_output_hw_valid() {
  function compute_halo_size_negative_rejected (line 2809) | fn compute_halo_size_negative_rejected() {
  function compute_min_spatial_tile_overflow (line 2814) | fn compute_min_spatial_tile_overflow() {
  function slice_weights_full_range_is_identity (line 2819) | fn slice_weights_full_range_is_identity() {
  function detect_dim_split_gemm_trans_b (line 2831) | fn detect_dim_split_gemm_trans_b() {
  function detect_dim_split_matmul_no_trans (line 2865) | fn detect_dim_split_matmul_no_trans() {
  function detect_dim_split_k_chunks_saturate_budget (line 2894) | fn detect_dim_split_k_chunks_saturate_budget() {
  function detect_dim_split_single_row_with_k_chunking (line 2925) | fn detect_dim_split_single_row_with_k_chunking() {
  function detect_dim_split_skips_single_row_single_chunk (line 2949) | fn detect_dim_split_skips_single_row_single_chunk() {
  function detect_dim_split_declines_infeasible_n (line 2981) | fn detect_dim_split_declines_infeasible_n() {
  function detect_dim_split_skips_non_terminal_matmul (line 3008) | fn detect_dim_split_skips_non_terminal_matmul() {
  function detect_dim_split_picks_terminal_matmul_after_consumed_one (line 3044) | fn detect_dim_split_picks_terminal_matmul_after_consumed_one() {
  function detect_dim_split_skips_gemm_trans_a (line 3078) | fn detect_dim_split_skips_gemm_trans_a() {
  function detect_dim_split_skips_gemm_with_bias (line 3105) | fn detect_dim_split_skips_gemm_with_bias() {
  function create_matmul_dim_template_uses_info_weight_name (line 3140) | fn create_matmul_dim_template_uses_info_weight_name() {
  function create_matmul_dim_template_disambiguates_shared_weight (line 3203) | fn create_matmul_dim_template_disambiguates_shared_weight() {
  function make_maxpool_node (line 3267) | fn make_maxpool_node(
  function pool_params_valid (line 3290) | fn pool_params_valid() {
  function pool_params_rejects_ceil_mode (line 3300) | fn pool_params_rejects_ceil_mode() {
  function pool_params_accepts_ceil_mode_zero (line 3306) | fn pool_params_accepts_ceil_mode_zero() {
  function pool_params_rejects_auto_pad (line 3312) | fn pool_params_rejects_auto_pad() {
  function pool_params_rejects_non_maxpool (line 3332) | fn pool_params_rejects_non_maxpool() {
  function make_elementwise_model (line 3342) | fn make_elementwise_model(op: &str, shape: &[i64]) -> ModelProto {
  function fixed_segments_too_small_returns_none (line 3351) | fn fixed_segments_too_small_returns_none() {
  function fixed_segments_detects_large_tensor (line 3357) | fn fixed_segments_detects_large_tensor() {
  function fixed_segments_rejects_zero_dim (line 3381) | fn fixed_segments_rejects_zero_dim() {
  function fixed_segments_rejects_non_elementwise (line 3387) | fn fixed_segments_rejects_non_elementwise() {
  function create_pool_tile_slice_valid (line 3402) | fn create_pool_tile_slice_valid() {
  function create_pool_tile_slice_rejects_zero_tile (line 3416) | fn create_pool_tile_slice_rejects_zero_tile() {
  function create_pool_tile_slice_no_pool_node (line 3427) | fn create_pool_tile_slice_no_pool_node() {
  function estimate_slice_constraints_clamps_symbolic_dimensions (line 3439) | fn estimate_slice_constraints_clamps_symbolic_dimensions() {

FILE: crates/dsperse/src/slicer/combiner.rs
  function materialize_combined_model (line 8) | pub fn materialize_combined_model(
  constant ONNX_STRING_DATATYPE (line 90) | const ONNX_STRING_DATATYPE: i32 = 8;
  constant NON_NUMERIC_TENSOR_TYPES (line 91) | const NON_NUMERIC_TENSOR_TYPES: &[i32] = &[ONNX_STRING_DATATYPE];
  function resolve_value_info (line 93) | fn resolve_value_info(
  function ensure_combined_materialized (line 126) | pub fn ensure_combined_materialized(
  function materialize_combined_to_disk (line 137) | pub fn materialize_combined_to_disk(
  function make_test_model (line 174) | fn make_test_model(
  function bool_outputs_included_in_combined_model (line 237) | fn bool_outputs_included_in_combined_model() {
  function combined_model_has_intermediate_outputs (line 275) | fn combined_model_has_intermediate_outputs() {
  function combined_model_to_disk_roundtrip (line 300) | fn combined_model_to_disk_roundtrip() {
  function ensure_combined_is_idempotent (line 335) | fn ensure_combined_is_idempotent() {

FILE: crates/dsperse/src/slicer/layernorm_fuse.rs
  function fuse_inline_layernorms (line 7) | pub fn fuse_inline_layernorms(
  type MatchedPattern (line 103) | struct MatchedPattern {
  function try_match_layernorm (line 116) | fn try_match_layernorm(
  function resolve_shape (line 265) | fn resolve_shape(
  function reduce_axes (line 283) | fn reduce_axes(node: &NodeProto, initializers: &HashMap<String, TensorPr...
  function get_keepdims (line 300) | fn get_keepdims(node: &NodeProto) -> Option<i64> {
  function find_unique_consumer (line 307) | fn find_unique_consumer(
  function find_square_consumer (line 323) | fn find_square_consumer(
  function pow_exponent_is_two (line 358) | fn pow_exponent_is_two(name: &str, initializers: &HashMap<String, Tensor...
  function extract_binary_const_scalar (line 372) | fn extract_binary_const_scalar(
  function other_input_if_init (line 392) | fn other_input_if_init(
  type ReplacementShapes (line 412) | type ReplacementShapes = Vec<(String, Vec<i64>)>;
  type Replacement (line 413) | type Replacement = (Vec<NodeProto>, Vec<TensorProto>, ReplacementShapes);
  function emit_replacement (line 415) | fn emit_replacement(
  function materialize_1d_initializer (line 502) | fn materialize_1d_initializer(
  function const_vector (line 538) | fn const_vector(name: &str, len: usize, fill: f32) -> TensorProto {
  function make_f32_vector (line 542) | fn make_f32_vector(name: &str, vals: &[f32]) -> TensorProto {
  function normalize_axis (line 552) | fn normalize_axis(axis: i64, rank: usize) -> usize {
  function int_attr (line 560) | fn int_attr(name: &str, v: i64) -> AttributeProto {
  function float_attr (line 569) | fn float_attr(name: &str, v: f32) -> AttributeProto {
  function int_list_attr (line 578) | fn int_list_attr(name: &str, vals: &[i64]) -> AttributeProto {

FILE: crates/dsperse/src/slicer/materializer.rs
  constant MAX_BACKWARD_DEPTH (line 10) | const MAX_BACKWARD_DEPTH: usize = 64;
  function resolve_shape_backward (line 12) | fn resolve_shape_backward(
  function resolve_shape_backward_inner (line 20) | fn resolve_shape_backward_inner(
  function materialize_slice_model (line 78) | pub fn materialize_slice_model(
  function materialize_slice_to_disk (line 188) | pub fn materialize_slice_to_disk(
  function ensure_slice_materialized (line 205) | pub fn ensure_slice_materialized(
  function materialize_tiling_artifacts (line 272) | fn materialize_tiling_artifacts(
  function ensure_all_slices_materialized (line 392) | pub fn ensure_all_slices_materialized(slices_dir: &Path, metadata: &Mode...
  function apply_traced_shapes (line 400) | fn apply_traced_shapes(mut model: ModelProto, shapes: &HashMap<String, V...
  function compute_future_dependencies (line 469) | fn compute_future_dependencies(
  type SegmentQuery (line 517) | struct SegmentQuery<'a> {
  type ShapeContext (line 524) | struct ShapeContext<'a> {
  function resolve_elem_type (line 536) | fn resolve_elem_type(&self, name: &str) -> i32 {
  function get_segment_details (line 554) | fn get_segment_details(
  function build_node_output_types (line 638) | pub fn build_node_output_types(graph: &GraphProto) -> HashMap<String, i3...
  function extract_dslice_archive (line 834) | fn extract_dslice_archive(archive: &Path, dest: &Path) -> Result<()> {
  function cleanup_extracted_slice (line 868) | pub fn cleanup_extracted_slice(slices_dir: &Path, slice_id: &str) {

FILE: crates/dsperse/src/slicer/mod.rs
  constant UNARY_ACTIVATIONS (line 15) | pub(crate) const UNARY_ACTIVATIONS: &[&str] = &[
  constant UNARY_STRUCTURAL (line 32) | pub(crate) const UNARY_STRUCTURAL: &[&str] = &["Cast", "Not", "Identity"...
  constant BINARY_ARITHMETIC (line 34) | pub(crate) const BINARY_ARITHMETIC: &[&str] = &["Add", "Sub", "Mul", "Di...
  constant NORMALIZATION_OPS (line 36) | pub(crate) const NORMALIZATION_OPS: &[&str] =
  constant LAYOUT_OPS (line 39) | pub(crate) const LAYOUT_OPS: &[&str] = &[
  constant CONTROL_FLOW_OPS (line 48) | pub(crate) const CONTROL_FLOW_OPS: &[&str] = &["Loop", "If", "Scan"];
  function is_control_flow (line 50) | pub(crate) fn is_control_flow(op: &str) -> bool {
  function collect_subgraph_outer_refs (line 54) | pub(crate) fn collect_subgraph_outer_refs(
  function collect_outer_refs_recursive (line 71) | fn collect_outer_refs_recursive(
  function is_shape_preserving (line 114) | pub(crate) fn is_shape_preserving(op: &str) -> bool {
  function is_slice_passthrough (line 134) | pub(crate) fn is_slice_passthrough(op: &str) -> bool {
  function is_elementwise (line 138) | pub(crate) fn is_elementwise(op: &str) -> bool {
  function is_binary_arithmetic (line 142) | pub(crate) fn is_binary_arithmetic(op: &str) -> bool {
  function build_segment_ranges (line 146) | pub(crate) fn build_segment_ranges(

FILE: crates/dsperse/src/slicer/onnx_fold.rs
  function fold_constant_nodes (line 7) | pub fn fold_constant_nodes(model: &mut ModelProto) -> HashSet<String> {
  function remove_identity_nodes (line 81) | pub fn remove_identity_nodes(graph: &mut GraphProto) -> usize {
  function eliminate_dead_nodes (line 149) | pub fn eliminate_dead_nodes(graph: &mut GraphProto) -> usize {
  function propagate_constants_with_shapes (line 182) | pub fn propagate_constants_with_shapes(
  function propagate_constants (line 239) | pub(crate) fn propagate_constants(graph: &mut GraphProto) -> HashSet<Str...
  function eval_const_node (line 338) | fn eval_const_node(
  function eval_expand (line 399) | fn eval_expand(inputs: &[&TensorProto], out_name: &str) -> Option<Vec<(S...
  function eval_tile (line 439) | fn eval_tile(inputs: &[&TensorProto], out_name: &str) -> Option<Vec<(Str...
  function eval_constant_of_shape (line 513) | fn eval_constant_of_shape(
  function eval_where (line 561) | fn eval_where(inputs: &[&TensorProto], out_name: &str) -> Option<Vec<(St...
  function eval_range (line 613) | fn eval_range(inputs: &[&TensorProto], out_name: &str) -> Option<Vec<(St...
  function eval_cmp (line 681) | fn eval_cmp(
  function eval_not (line 729) | fn eval_not(input: &TensorProto, out_name: &str) -> Option<Vec<(String, ...
  function eval_logical (line 744) | fn eval_logical(
  function eval_transpose (line 775) | fn eval_transpose(
  function eval_resize (line 866) | fn eval_resize(
  type ResizeMode (line 1072) | enum ResizeMode {
  function nearest_idx (line 1078) | fn nearest_idx(s: f32, dim: usize) -> usize {
  function sample_linear_2d (line 1091) | fn sample_linear_2d(
  function sample_cubic_2d (line 1119) | fn sample_cubic_2d(
  function cubic_weights (line 1181) | fn cubic_weights(t: f32, a: f32) -> [f32; 4] {
  function cubic_kernel (line 1194) | fn cubic_kernel(x: f32, a: f32) -> f32 {
  function clamp_axis (line 1205) | fn clamp_axis(i: isize, dim: usize) -> (bool, usize) {
  type ReduceOp (line 1216) | enum ReduceOp {
  function eval_reduce (line 1223) | fn eval_reduce(
  function eval_cast (line 1333) | fn eval_cast(
  function eval_unary_f32 (line 1418) | fn eval_unary_f32(
  function eval_binary_f32 (line 1432) | fn eval_binary_f32(
  function broadcast_shape (line 1471) | fn broadcast_shape(a_dims: &[i64], b_dims: &[i64]) -> Option<Vec<i64>> {
  function broadcast_index (line 1498) | fn broadcast_index(out_idx: usize, out_dims: &[i64], src_dims: &[i64]) -...
  constant MAX_BROADCAST_ELEMENTS (line 1514) | const MAX_BROADCAST_ELEMENTS: usize = 100_000_000;
  function broadcast_total (line 1516) | fn broadcast_total(out_dims: &[i64]) -> Option<usize> {
  function broadcast_binary (line 1528) | fn broadcast_binary(
  function broadcast_binary_i64 (line 1546) | fn broadcast_binary_i64(
  function eval_reshape (line 1564) | fn eval_reshape(
  function eval_squeeze (line 1614) | fn eval_squeeze(
  function eval_unsqueeze (line 1669) | fn eval_unsqueeze(
  function eval_shape (line 1709) | fn eval_shape(
  function eval_gather (line 1756) | fn eval_gather(
  function eval_slice (line 1826) | fn eval_slice(inputs: &[&TensorProto], out_name: &str) -> Option<Vec<(St...
  function eval_scatter_nd (line 1996) | fn eval_scatter_nd(inputs: &[&TensorProto], out_name: &str) -> Option<Ve...
  function eval_split (line 2089) | fn eval_split(
  function eval_concat (line 2197) | fn eval_concat(
  function make_f32_tensor (line 2305) | fn make_f32_tensor(name: &str, dims: &[i64], vals: &[f32], target_type: ...
  type ConvBnFusion (line 2345) | struct ConvBnFusion {
  function fuse_conv_batchnorm (line 2367) | pub fn fuse_conv_batchnorm(graph: &mut GraphProto) -> usize {

FILE: crates/dsperse/src/slicer/onnx_proto.rs
  function load_model (line 23) | pub fn load_model(path: &Path) -> Result<ModelProto> {
  function canonicalize_node_attributes (line 29) | fn canonicalize_node_attributes(nodes: &mut [NodeProto]) {
  function save_model (line 43) | pub fn save_model(model: &ModelProto, path: &Path) -> Result<()> {
  function make_tensor_value_info (line 58) | pub fn make_tensor_value_info(name: &str, elem_type: i32, shape: &[i64])...
  function make_tensor (line 85) | pub fn make_tensor(name: &str, elem_type: i32, dims: &[i64], float_data:...
  function make_node (line 95) | pub fn make_node(
  function make_graph (line 115) | pub fn make_graph(
  function make_model (line 132) | pub fn make_model(graph: GraphProto, opset_version: i64) -> ModelProto {
  function make_attribute_ints (line 144) | pub fn make_attribute_ints(name: &str, ints: &[i64]) -> AttributeProto {
  function make_attribute_int (line 153) | pub fn make_attribute_int(name: &str, val: i64) -> AttributeProto {
  function get_attribute_ints (line 162) | pub fn get_attribute_ints(node: &NodeProto, name: &str) -> Option<Vec<i6...
  function get_attribute_int (line 169) | pub fn get_attribute_int(node: &NodeProto, name: &str) -> Option<i64> {
  function get_attribute_float (line 173) | pub fn get_attribute_float(node: &NodeProto, name: &str) -> Option<f32> {
  function make_attribute_float (line 177) | pub fn make_attribute_float(name: &str, val: f32) -> AttributeProto {
  function tensor_to_i64 (line 186) | pub fn tensor_to_i64(tensor: &TensorProto) -> Vec<i64> {
  function tensor_to_f32 (line 211) | pub fn tensor_to_f32(tensor: &TensorProto) -> Vec<f32> {
  function tensor_to_f64 (line 290) | pub fn tensor_to_f64(tensor: &TensorProto) -> Vec<f64> {
  function build_initializer_map (line 351) | pub fn build_initializer_map(graph: &GraphProto) -> HashMap<String, &Ten...
  function build_value_info_map (line 359) | pub fn build_value_info_map(graph: &GraphProto) -> HashMap<String, &Valu...
  constant FLOAT (line 374) | pub const FLOAT: i32 = 1;
  constant INT64 (line 375) | pub const INT64: i32 = 7;
  constant DOUBLE (line 376) | pub const DOUBLE: i32 = 11;
  constant INT32 (line 377) | pub const INT32: i32 = 6;
  constant FLOAT16 (line 378) | pub const FLOAT16: i32 = 10;
  constant BOOL (line 379) | pub const BOOL: i32 = 9;
  function is_paddable_shape (line 382) | fn is_paddable_shape(target: &[i64], donor: &[i64]) -> bool {
  function validate_initializer_compatibility (line 390) | pub fn validate_initializer_compatibility(
  function pad_float_data (line 429) | fn pad_float_data(
  function pad_raw_data_f32 (line 449) | fn pad_raw_data_f32(raw: &[u8], target_dims: &[i64], donor_dims: &[i64],...
  function replace_initializers (line 458) | pub fn replace_initializers(
  function build_patched_onnx (line 513) | pub fn build_patched_onnx(
  function model_opset_version (line 525) | fn model_opset_version(model: &ModelProto) -> i64 {
  function min_opset_for_op (line 534) | fn min_opset_for_op(op_type: &str) -> Option<i64> {
  function normalize_opset (line 543) | pub fn normalize_opset(model: &mut ModelProto) -> usize {
  function normalize_for_circuit_backend (line 613) | pub fn normalize_for_circuit_backend(model: &mut ModelProto) -> usize {
  function fix_zero_dims (line 633) | fn fix_zero_dims(graph: &mut GraphProto) -> usize {
  function flatten_matmul_inputs (line 673) | fn flatten_matmul_inputs(graph: &mut GraphProto) -> usize {
  function materialize_reshape_targets (line 870) | fn materialize_reshape_targets(graph: &mut GraphProto) -> usize {

FILE: crates/dsperse/src/slicer/onnx_shapes.rs
  function shape_from_value_info (line 3) | pub fn shape_from_value_info(vi: &ValueInfoProto) -> Option<Vec<i64>> {
  function elem_type_from_value_info (line 19) | pub fn elem_type_from_value_info(vi: &ValueInfoProto) -> Option<i32> {
  function vi_shape (line 27) | pub fn vi_shape(vi: &ValueInfoProto) -> Vec<i64> {
  function set_vi_shape (line 46) | pub fn set_vi_shape(vi: &mut ValueInfoProto, shape: &[i64]) {
  function strip_symbolic_value_info (line 62) | pub fn strip_symbolic_value_info(model: &mut ModelProto) -> usize {
  function resolve_dynamic_input_shapes (line 114) | pub fn resolve_dynamic_input_shapes(

FILE: crates/dsperse/src/slicer/onnx_slicer.rs
  function slice_model (line 14) | pub fn slice_model(
  function build_slice_metadata (line 254) | fn build_slice_metadata(
  function build_shape_from_traced (line 411) | fn build_shape_from_traced(
  function determine_slice_points (line 436) | fn determine_slice_points(
  function optimize_points (line 471) | fn optimize_points(
  function is_spatial_primary (line 486) | fn is_spatial_primary(op: &str) -> bool {
  function isolate_expensive_ops (line 498) | fn isolate_expensive_ops(
  function isolate_conv (line 600) | fn isolate_conv(points: &[usize], analysis: &AnalysisResult) -> Vec<usiz...
  function optimize_jstprove_slices (line 636) | fn optimize_jstprove_slices(
  function optimize_for_tiling (line 651) | fn optimize_for_tiling(points: &[usize], analysis: &AnalysisResult) -> V...
  function filter_constant_only_slices (line 669) | fn filter_constant_only_slices(points: &[usize], analysis: &AnalysisResu...
  function merge_control_flow_segments (line 702) | fn merge_control_flow_segments(points: &[usize], analysis: &AnalysisResu...
  function complete_slice_points (line 744) | fn complete_slice_points(points: &mut Vec<usize>, analysis: &AnalysisRes...
  function broadcast_shapes (line 757) | pub(crate) fn broadcast_shapes(shapes: &[&Vec<i64>]) -> Option<Vec<i64>> {
  function make_analysis_with_params (line 782) | fn make_analysis_with_params(nodes: Vec<(&str, usize, &str, bool)>) -> A...
  constant TEST_OPS (line 823) | const TEST_OPS: &[&str] = &["Conv", "Gemm", "MatMul"];
  function complete_slice_points_adds_boundaries (line 826) | fn complete_slice_points_adds_boundaries() {
  function complete_slice_points_already_complete (line 840) | fn complete_slice_points_already_complete() {
  function complete_slice_points_deduplicates (line 849) | fn complete_slice_points_deduplicates() {
  function isolate_conv_inserts_boundaries (line 857) | fn isolate_conv_inserts_boundaries() {
  function isolate_conv_no_convs (line 874) | fn isolate_conv_no_convs() {
  function isolate_maxpool_gets_boundary (line 883) | fn isolate_maxpool_gets_boundary() {
  function optimize_jstprove_slices_splits_at_boundary (line 892) | fn optimize_jstprove_slices_splits_at_boundary() {
  function optimize_jstprove_slices_all_supported (line 905) | fn optimize_jstprove_slices_all_supported() {
  function optimize_for_tiling_maxpool_stays_grouped (line 914) | fn optimize_for_tiling_maxpool_stays_grouped() {
  function optimize_for_tiling_splits_at_non_tileable (line 927) | fn optimize_for_tiling_splits_at_non_tileable() {
  function optimize_for_tiling_relu_after_non_tileable_kept (line 940) | fn optimize_for_tiling_relu_after_non_tileable_kept() {
  function filter_constant_only_slices_removes_constant_segments (line 952) | fn filter_constant_only_slices_removes_constant_segments() {
  function filter_constant_only_slices_keeps_non_constant (line 966) | fn filter_constant_only_slices_keeps_non_constant() {
  function filter_constant_only_slices_empty_points (line 975) | fn filter_constant_only_slices_empty_points() {
  function determine_slice_points_includes_parameterized_nodes (line 982) | fn determine_slice_points_includes_parameterized_nodes() {
  function determine_slice_points_with_tile_size (line 999) | fn determine_slice_points_with_tile_size() {
  type NodeSpec (line 1013) | type NodeSpec<'a> = (&'a str, usize, &'a str, bool, Vec<&'a str>, Vec<&'...
  function make_analysis_with_deps (line 1015) | fn make_analysis_with_deps(nodes: Vec<NodeSpec<'_>>) -> AnalysisResult {
  function merge_control_flow_removes_boundary_between_producer_and_loop (line 1057) | fn merge_control_flow_removes_boundary_between_producer_and_loop() {
  function merge_control_flow_preserves_unrelated_boundaries (line 1095) | fn merge_control_flow_preserves_unrelated_boundaries() {
  function merge_control_flow_no_control_flow_ops (line 1141) | fn merge_control_flow_no_control_flow_ops() {
  function isolate_conv_absorbs_reshape_then_boundaries_on_matmul (line 1161) | fn isolate_conv_absorbs_reshape_then_boundaries_on_matmul() {
  function isolate_conv_absorbs_transpose_chain_then_boundaries_on_matmul (line 1200) | fn isolate_conv_absorbs_transpose_chain_then_boundaries_on_matmul() {
  function isolate_conv_stops_when_passthrough_consumes_external_input (line 1243) | fn isolate_conv_stops_when_passthrough_consumes_external_input() {
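The `broadcast_shapes(shapes: &[&Vec<i64>]) -> Option<Vec<i64>>` signature above suggests ONNX/NumPy-style multidirectional broadcasting over a list of dimension vectors. A minimal sketch of the assumed semantics (right-aligned dims, a dim of 1 broadcasts against any other, `None` on conflict) — not the actual implementation:

```rust
// Hypothetical sketch of multidirectional (ONNX/NumPy-style) shape
// broadcasting, matching the signature of `broadcast_shapes` above.
// Dims are right-aligned; a dim of 1 broadcasts against any other dim;
// any other mismatch makes the shapes incompatible.
fn broadcast_shapes(shapes: &[&Vec<i64>]) -> Option<Vec<i64>> {
    let rank = shapes.iter().map(|s| s.len()).max()?;
    let mut out = vec![1i64; rank];
    for shape in shapes {
        let offset = rank - shape.len();
        for (i, &d) in shape.iter().enumerate() {
            let o = &mut out[offset + i];
            if *o == 1 {
                *o = d;
            } else if d != 1 && d != *o {
                return None; // incompatible dims
            }
        }
    }
    Some(out)
}

fn main() {
    let a = vec![1, 3, 1];
    let b = vec![2, 1, 4];
    assert_eq!(broadcast_shapes(&[&a, &b]), Some(vec![2, 3, 4]));
    // A dim that is neither 1 nor equal to its counterpart fails.
    assert_eq!(broadcast_shapes(&[&b, &vec![5]]), None);
}
```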

FILE: crates/dsperse/src/slicer/self_div_rewrite.rs
  function rewrite_self_div_to_one (line 14) | pub fn rewrite_self_div_to_one(

FILE: crates/dsperse/src/slicer/trace.rs
  type TraceResult (line 7) | pub(crate) struct TraceResult {
  function fold_and_trace_via_tract (line 12) | pub(crate) fn fold_and_trace_via_tract(
  function tag_all_outputs (line 236) | fn tag_all_outputs(onnx_path: &Path, model: &ModelProto) -> Result<std::...
  function onnx_elem_type_to_datum (line 257) | fn onnx_elem_type_to_datum(onnx_type: i32) -> Option<tract_onnx::prelude...
  function datum_type_to_onnx (line 275) | fn datum_type_to_onnx(dt: tract_onnx::prelude::DatumType) -> u8 {
  type LoopBody (line 294) | struct LoopBody {
  function collect_loop_bodies (line 304) | fn collect_loop_bodies(model: &ModelProto) -> HashMap<String, LoopBody> {
  function synthesize_loop_outputs (line 377) | fn synthesize_loop_outputs(
  function resolve_absorbed_nodes (line 428) | fn resolve_absorbed_nodes(
  function resolve_body_tensor_shape (line 467) | fn resolve_body_tensor_shape(
  function resolve_body_tensor_shape_inner (line 476) | fn resolve_body_tensor_shape_inner(

FILE: crates/dsperse/src/utils/io.rs
  function read_msgpack (line 9) | pub fn read_msgpack(path: &Path) -> Result<Value> {
  function write_msgpack (line 14) | pub fn write_msgpack(path: &Path, value: &Value) -> Result<()> {
  function extract_input_data (line 22) | pub fn extract_input_data(value: &Value) -> Option<&Value> {
  function flatten_nested_list (line 29) | pub fn flatten_nested_list(value: &Value) -> Vec<f64> {
  function flatten_recursive (line 35) | fn flatten_recursive(value: &Value, out: &mut Vec<f64>) {
  function infer_shape (line 57) | pub fn infer_shape(value: &Value) -> Vec<usize> {
  function value_to_arrayd (line 71) | pub fn value_to_arrayd(value: &Value) -> Result<ArrayD<f64>> {
  function arrayd_to_value (line 97) | pub fn arrayd_to_value(arr: &ArrayD<f64>) -> Value {
  function gather_inputs_from_cache (line 116) | pub fn gather_inputs_from_cache(
  function build_msgpack_map (line 183) | pub fn build_msgpack_map(entries: Vec<(&str, Value)>) -> Value {
  function map_get_ref (line 192) | pub fn map_get_ref<'a>(value: &'a Value, key: &str) -> Option<&'a Value> {
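The `flatten_nested_list` / `infer_shape` pair above implies a depth-first walk over a nested msgpack value and a per-level length read. A sketch of that behavior over a stand-in `Val` enum (not the real `rmpv::Value`), assuming a rectangular nesting layout:

```rust
// Hypothetical stand-in for the msgpack value tree used by
// `flatten_nested_list` / `infer_shape` above (not the real rmpv::Value).
enum Val {
    Num(f64),
    List(Vec<Val>),
}

// Depth-first flatten: leaves are emitted in traversal order.
fn flatten(v: &Val, out: &mut Vec<f64>) {
    match v {
        Val::Num(x) => out.push(*x),
        Val::List(items) => items.iter().for_each(|it| flatten(it, out)),
    }
}

// Shape inference: record the length at each nesting level, descending
// into the first element (assumes a rectangular, non-ragged layout).
fn infer_shape(v: &Val) -> Vec<usize> {
    let mut shape = Vec::new();
    let mut cur = v;
    while let Val::List(items) = cur {
        shape.push(items.len());
        match items.first() {
            Some(first) => cur = first,
            None => break,
        }
    }
    shape
}

fn main() {
    // [[1, 2, 3], [4, 5, 6]]
    let v = Val::List(vec![
        Val::List(vec![Val::Num(1.0), Val::Num(2.0), Val::Num(3.0)]),
        Val::List(vec![Val::Num(4.0), Val::Num(5.0), Val::Num(6.0)]),
    ]);
    let mut flat = Vec::new();
    flatten(&v, &mut flat);
    assert_eq!(flat, vec![1.0, 2.0, 3.0, 4.0, 5.0, 6.0]);
    assert_eq!(infer_shape(&v), vec![2, 3]);
}
```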

FILE: crates/dsperse/src/utils/limits.rs
  function reject_symlink (line 6) | pub fn reject_symlink(path: &Path) -> Result<()> {
  function open_nofollow (line 19) | fn open_nofollow(path: &Path) -> Result<std::fs::File> {
  function read_checked (line 47) | pub fn read_checked(path: &Path) -> Result<Vec<u8>> {
  function read_to_string_checked (line 55) | pub fn read_to_string_checked(path: &Path) -> Result<String> {
  function reject_symlink_on_regular_file (line 68) | fn reject_symlink_on_regular_file() {
  function reject_symlink_on_symlink (line 75) | fn reject_symlink_on_symlink() {
  function read_checked_normal (line 85) | fn read_checked_normal() {
  function read_to_string_checked_normal (line 93) | fn read_to_string_checked_normal() {

FILE: crates/dsperse/src/utils/metadata.rs
  function load_run_metadata (line 6) | pub fn load_run_metadata(path: &Path) -> Result<RunMetadata> {
  function save_run_metadata (line 11) | pub fn save_run_metadata(path: &Path, meta: &RunMetadata) -> Result<()> {

FILE: crates/dsperse/src/utils/paths.rs
  constant METADATA_FILE (line 5) | pub const METADATA_FILE: &str = "metadata.msgpack";
  constant INPUT_FILE (line 6) | pub const INPUT_FILE: &str = "input.msgpack";
  constant OUTPUT_FILE (line 7) | pub const OUTPUT_FILE: &str = "output.msgpack";
  constant WITNESS_FILE (line 8) | pub const WITNESS_FILE: &str = "witness.bin";
  constant PROOF_FILE (line 9) | pub const PROOF_FILE: &str = "proof.bin";
  function resolve_relative_path (line 11) | pub fn resolve_relative_path(base: &Path, relative: &str) -> Result<Path...
  function relativize_path (line 36) | pub fn relativize_path(path: &Path, base: &Path) -> String {
  function slice_dir_path (line 42) | pub fn slice_dir_path(root: &Path, index: usize) -> PathBuf {
  function find_metadata_path (line 46) | pub fn find_metadata_path(dir: &Path) -> Option<PathBuf> {
  function resolve_relative_normal_path (line 63) | fn resolve_relative_normal_path() {
  function resolve_relative_rejects_absolute (line 70) | fn resolve_relative_rejects_absolute() {
  function resolve_relative_rejects_parent_dir (line 76) | fn resolve_relative_rejects_parent_dir() {
  function resolve_relative_rejects_embedded_parent (line 82) | fn resolve_relative_rejects_embedded_parent() {
  function resolve_relative_allows_current_dir (line 88) | fn resolve_relative_allows_current_dir() {
  function resolve_relative_empty_string (line 95) | fn resolve_relative_empty_string() {
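The test names above (`resolve_relative_rejects_absolute`, `..._rejects_parent_dir`, `..._rejects_embedded_parent`, `..._allows_current_dir`) pin down the contract of `resolve_relative_path`: a sanitizing join that refuses path traversal. A sketch of that contract under those assumed semantics (the real error type and exact checks may differ):

```rust
use std::path::{Component, Path, PathBuf};

// Hypothetical sketch of the sanitizing join exercised by the
// `resolve_relative_*` tests above: absolute inputs and any `..`
// component (leading or embedded) are rejected; `.` is permitted.
fn resolve_relative_path(base: &Path, relative: &str) -> Result<PathBuf, String> {
    let rel = Path::new(relative);
    if rel.is_absolute() {
        return Err("absolute paths are not allowed".into());
    }
    if rel.components().any(|c| matches!(c, Component::ParentDir)) {
        return Err("parent-directory components are not allowed".into());
    }
    Ok(base.join(rel))
}

fn main() {
    let base = Path::new("/work");
    assert_eq!(
        resolve_relative_path(base, "a/b").unwrap(),
        PathBuf::from("/work/a/b")
    );
    assert!(resolve_relative_path(base, "/etc/passwd").is_err());
    assert!(resolve_relative_path(base, "../x").is_err());
    assert!(resolve_relative_path(base, "a/../x").is_err());
    assert!(resolve_relative_path(base, "./a").is_ok());
}
```

Rejecting `..` per-component (rather than string-matching) also catches embedded traversal like `a/../x`, which the `rejects_embedded_parent` test above suggests is the intended behavior.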

FILE: crates/dsperse/src/version.rs
  type DsperseVersion (line 4) | pub struct DsperseVersion {
  function dsperse_artifact_version (line 11) | pub fn dsperse_artifact_version() -> DsperseVersion {

FILE: crates/dsperse/tests/integration_slice.rs
  function test_models_dir (line 5) | fn test_models_dir() -> &'static Path {
  function slice_net_model (line 10) | fn slice_net_model() {
  function slice_doom_model (line 53) | fn slice_doom_model() {
  function slice_net_model_remainder (line 83) | fn slice_net_model_remainder() {
  function slice_with_tile_size (line 108) | fn slice_with_tile_size() {
  function slice_metadata_roundtrip_from_disk (line 135) | fn slice_metadata_roundtrip_from_disk() {
  function materialize_from_manifest (line 166) | fn materialize_from_manifest() {
  function resolve_onnx_points_to_existing_file_after_materialize (line 210) | fn resolve_onnx_points_to_existing_file_after_materialize() {

FILE: crates/dsperse/tests/schema_roundtrip.rs
  function model_metadata_roundtrip (line 6) | fn model_metadata_roundtrip() {
  function run_metadata_roundtrip (line 120) | fn run_metadata_roundtrip() {
  function execution_info_with_tiles (line 181) | fn execution_info_with_tiles() {
  function channel_split_roundtrip (line 200) | fn channel_split_roundtrip() {
  function compilation_files_aliases (line 233) | fn compilation_files_aliases() {
  function backend_serde (line 248) | fn backend_serde() {
  function tensor_shape_i64_deserialization (line 266) | fn tensor_shape_i64_deserialization() {
  function tensor_shape_rejects_non_integer (line 283) | fn tensor_shape_rejects_non_integer() {
  function run_slice_metadata_i64_shapes (line 290) | fn run_slice_metadata_i64_shapes() {
  function resolve_onnx_uses_relative_path_not_absolute (line 314) | fn resolve_onnx_uses_relative_path_not_absolute() {

FILE: crates/dsperse/tests/sn2_contract.rs
  function make_value_array (line 6) | fn make_value_array(vals: &[f64]) -> Value {
  function make_value_2d (line 10) | fn make_value_2d(rows: &[&[f64]]) -> Value {
  function make_value_3d (line 14) | fn make_value_3d(planes: &[&[&[f64]]]) -> Value {
  function make_value_4d (line 18) | fn make_value_4d(blocks: &[&[&[&[f64]]]]) -> Value {
  function value_arrayd_roundtrip_1d (line 23) | fn value_arrayd_roundtrip_1d() {
  function value_arrayd_roundtrip_2d (line 35) | fn value_arrayd_roundtrip_2d() {
  function value_arrayd_roundtrip_3d (line 47) | fn value_arrayd_roundtrip_3d() {
  function value_arrayd_roundtrip_4d (line 59) | fn value_arrayd_roundtrip_4d() {
  function value_arrayd_full_roundtrip_preserves_values (line 71) | fn value_arrayd_full_roundtrip_preserves_values() {
  function extract_input_data_key_precedence (line 83) | fn extract_input_data_key_precedence() {
  function extract_input_data_fallback_to_input (line 95) | fn extract_input_data_fallback_to_input() {
  function extract_input_data_fallback_to_data (line 106) | fn extract_input_data_fallback_to_data() {
  function extract_input_data_fallback_to_inputs (line 116) | fn extract_input_data_fallback_to_inputs() {
  function extract_input_data_returns_none_for_unrecognized_keys (line 126) | fn extract_input_data_returns_none_for_unrecognized_keys() {
  function slice_dir_path_formats_correctly (line 135) | fn slice_dir_path_formats_correctly() {
  function arrayd_to_value_then_extract_input_data_integration (line 152) | fn arrayd_to_value_then_extract_input_data_integration() {

FILE: python/dsperse/cli.py
  function main (line 4) | def main():

Condensed preview — 72 files, each showing path, character count, and a content snippet. Download the .json file or copy for the full structured content (1,001K chars).
[
  {
    "path": ".cargo/audit.toml",
    "chars": 123,
    "preview": "[advisories]\nignore = [\n    \"RUSTSEC-2026-0009\", # time crate DoS via RFC 2822 parsing — transitive dep, not user-facing"
  },
  {
    "path": ".cargo/config.toml",
    "chars": 32,
    "preview": "[net]\ngit-fetch-with-cli = true\n"
  },
  {
    "path": ".github/workflows/integration_tests.yml",
    "chars": 2133,
    "preview": "name: Integration Tests\n\non:\n  push:\n    branches:\n      - main\n  pull_request:\n    branches:\n      - main\n\nconcurrency:"
  },
  {
    "path": ".github/workflows/publish.yml",
    "chars": 6615,
    "preview": "name: Build and Publish to PyPI\n\non:\n  push:\n    tags:\n      - \"v*\"\n  pull_request:\n  workflow_dispatch:\n\nconcurrency:\n "
  },
  {
    "path": ".gitignore",
    "chars": 1000,
    "preview": "# macOS system files\n.DS_Store\n.DS_*\ntests/models/run\n# macOS metadata\n._*\n\n# Python cache\n__pycache__/\n*.py[cod]\n\n# Env"
  },
  {
    "path": "Cargo.toml",
    "chars": 1096,
    "preview": "[workspace]\nmembers = [\"crates/dsperse\"]\nresolver = \"2\"\n\n[workspace.package]\nedition = \"2024\"\n\n[workspace.dependencies]\n"
  },
  {
    "path": "LICENSE",
    "chars": 1159,
    "preview": "Copyright (c) 2025 Inference Labs Inc.\n\nSource Access Grant\nYou may access, view, study, and modify the source code of t"
  },
  {
    "path": "README.md",
    "chars": 5176,
    "preview": "# DSperse: Community Edition\n\n[![GitHub](https://img.shields.io/badge/GitHub-Repository-blue?style=flat-square&logo=gith"
  },
  {
    "path": "crates/dsperse/Cargo.toml",
    "chars": 1048,
    "preview": "[package]\nname = \"dsperse\"\nversion = \"0.0.0\"\nedition.workspace = true\n\n[features]\ndefault = []\npython = [\"dep:pyo3\", \"py"
  },
  {
    "path": "crates/dsperse/benches/serialization.rs",
    "chars": 7227,
    "preview": "use std::collections::HashMap;\n\nuse criterion::{Criterion, black_box, criterion_group, criterion_main};\nuse dsperse::sch"
  },
  {
    "path": "crates/dsperse/build.rs",
    "chars": 1885,
    "preview": "fn main() {\n    prost_build::Config::new()\n        .compile_protos(&[\"proto/onnx.proto\"], &[\"proto/\"])\n        .expect(\""
  },
  {
    "path": "crates/dsperse/proto/onnx.proto",
    "chars": 41044,
    "preview": "//\n// WARNING: This file is automatically generated!  Please edit onnx.in.proto.\n//\n\n\n// SPDX-License-Identifier: Apache"
  },
  {
    "path": "crates/dsperse/src/backend/jstprove.rs",
    "chars": 19027,
    "preview": "use std::collections::HashMap;\nuse std::path::{Path, PathBuf};\nuse std::sync::{Arc, Mutex};\n\npub use jstprove_circuits::"
  },
  {
    "path": "crates/dsperse/src/backend/mod.rs",
    "chars": 79,
    "preview": "pub mod jstprove;\npub mod onnx;\npub mod traits;\n\npub use traits::ProofBackend;\n"
  },
  {
    "path": "crates/dsperse/src/backend/onnx.rs",
    "chars": 38189,
    "preview": "use std::collections::HashMap;\nuse std::path::Path;\nuse std::sync::Arc;\n\nuse ndarray::IxDyn;\nuse tract_onnx::prelude::*;"
  },
  {
    "path": "crates/dsperse/src/backend/traits.rs",
    "chars": 447,
    "preview": "use std::path::Path;\n\nuse crate::error::Result;\n\npub trait ProofBackend: Send + Sync {\n    fn prove(&self, circuit_path:"
  },
  {
    "path": "crates/dsperse/src/cli/mod.rs",
    "chars": 35557,
    "preview": "use std::num::NonZeroUsize;\nuse std::path::{Path, PathBuf};\n\nuse clap::{Args, Parser, Subcommand};\n\nuse crate::backend::"
  },
  {
    "path": "crates/dsperse/src/converter.rs",
    "chars": 2133,
    "preview": "use std::collections::{HashMap, HashSet};\nuse std::path::Path;\n\nuse jstprove_circuits::api::{\n    self, ArchitectureType"
  },
  {
    "path": "crates/dsperse/src/error.rs",
    "chars": 1079,
    "preview": "use std::path::PathBuf;\n\npub type Result<T> = std::result::Result<T, DsperseError>;\n\n#[derive(Debug, thiserror::Error)]\n"
  },
  {
    "path": "crates/dsperse/src/lib.rs",
    "chars": 186,
    "preview": "pub mod backend;\npub mod cli;\npub mod converter;\npub mod error;\npub mod pipeline;\npub mod schema;\npub mod slicer;\npub mo"
  },
  {
    "path": "crates/dsperse/src/main.rs",
    "chars": 474,
    "preview": "use clap::Parser;\nuse tracing_subscriber::EnvFilter;\n\nuse dsperse::cli;\n\nfn main() {\n    let parsed = cli::Cli::parse();"
  },
  {
    "path": "crates/dsperse/src/pipeline/channel_split.rs",
    "chars": 10942,
    "preview": "use std::collections::HashMap;\nuse std::path::Path;\n\nuse ndarray::{Array4, ArrayD, s};\n\nuse super::runner::{generate_wai"
  },
  {
    "path": "crates/dsperse/src/pipeline/combined.rs",
    "chars": 14393,
    "preview": "use std::collections::{HashMap, HashSet};\nuse std::path::{Path, PathBuf};\n\nuse ndarray::{ArrayD, IxDyn};\n\nuse super::inc"
  },
  {
    "path": "crates/dsperse/src/pipeline/compiler.rs",
    "chars": 69847,
    "preview": "use std::collections::HashMap;\nuse std::path::{Path, PathBuf};\n\nuse rayon::prelude::*;\n\nuse crate::backend::jstprove::Js"
  },
  {
    "path": "crates/dsperse/src/pipeline/dim_split.rs",
    "chars": 16108,
    "preview": "use std::collections::HashMap;\nuse std::path::Path;\n\nuse super::runner::{run_onnx_inference, run_onnx_inference_multi_na"
  },
  {
    "path": "crates/dsperse/src/pipeline/incremental.rs",
    "chars": 8112,
    "preview": "use std::path::{Path, PathBuf};\n\nuse ndarray::ArrayD;\n\nuse crate::error::{DsperseError, Result};\nuse crate::schema::exec"
  },
  {
    "path": "crates/dsperse/src/pipeline/mod.rs",
    "chars": 832,
    "preview": "mod channel_split;\nmod combined;\nmod compiler;\nmod dim_split;\nmod incremental;\npub mod packager;\nmod prover;\npub mod pub"
  },
  {
    "path": "crates/dsperse/src/pipeline/packager.rs",
    "chars": 59333,
    "preview": "use std::collections::HashSet;\nuse std::fs;\nuse std::io::Read;\nuse std::path::{Path, PathBuf};\n\nuse serde::Serialize;\nus"
  },
  {
    "path": "crates/dsperse/src/pipeline/prover.rs",
    "chars": 410,
    "preview": "use std::path::Path;\n\nuse crate::backend::ProofBackend;\nuse crate::error::Result;\nuse crate::schema::execution::RunMetad"
  },
  {
    "path": "crates/dsperse/src/pipeline/publisher.rs",
    "chars": 17928,
    "preview": "use std::fs;\nuse std::path::Path;\nuse std::time::Duration;\n\nuse sha2::{Digest, Sha256};\n\nuse crate::error::{DsperseError"
  },
  {
    "path": "crates/dsperse/src/pipeline/runner.rs",
    "chars": 52974,
    "preview": "use std::collections::HashMap;\nuse std::path::{Path, PathBuf};\n\nuse ndarray::{ArrayD, IxDyn};\n\nuse jstprove_circuits::ap"
  },
  {
    "path": "crates/dsperse/src/pipeline/slice_cache.rs",
    "chars": 1976,
    "preview": "use std::io::Read;\nuse std::path::Path;\n\nuse crate::error::{DsperseError, Result};\n\npub struct SliceAssets {\n    pub cir"
  },
  {
    "path": "crates/dsperse/src/pipeline/stage.rs",
    "chars": 13484,
    "preview": "use std::path::Path;\n\nuse rayon::prelude::*;\n\nuse crate::backend::ProofBackend;\nuse crate::error::{DsperseError, Result}"
  },
  {
    "path": "crates/dsperse/src/pipeline/strategy.rs",
    "chars": 3023,
    "preview": "use crate::error::{DsperseError, Result};\nuse crate::schema::execution::ExecutionMethod;\nuse crate::schema::metadata::Ru"
  },
  {
    "path": "crates/dsperse/src/pipeline/tensor_store.rs",
    "chars": 2174,
    "preview": "use std::collections::HashMap;\n\nuse ndarray::ArrayD;\n\nuse crate::error::{DsperseError, Result};\n\n#[derive(Default)]\npub "
  },
  {
    "path": "crates/dsperse/src/pipeline/tile_executor.rs",
    "chars": 3686,
    "preview": "use std::path::{Path, PathBuf};\n\nuse rayon::prelude::*;\n\nuse crate::error::{DsperseError, Result};\nuse crate::schema::ti"
  },
  {
    "path": "crates/dsperse/src/pipeline/tiled.rs",
    "chars": 37841,
    "preview": "use std::collections::HashMap;\nuse std::path::Path;\nuse std::sync::Arc;\n\nuse ndarray::{Array4, ArrayD, IxDyn, s};\nuse ra"
  },
  {
    "path": "crates/dsperse/src/pipeline/verifier.rs",
    "chars": 459,
    "preview": "use std::path::Path;\n\nuse crate::backend::ProofBackend;\nuse crate::error::Result;\nuse crate::schema::execution::RunMetad"
  },
  {
    "path": "crates/dsperse/src/python.rs",
    "chars": 9238,
    "preview": "use std::path::PathBuf;\n\nuse pyo3::exceptions::PyRuntimeError;\nuse pyo3::prelude::*;\n\nuse crate::backend::jstprove::Jstp"
  },
  {
    "path": "crates/dsperse/src/schema/execution.rs",
    "chars": 7429,
    "preview": "use std::collections::HashMap;\n\nuse serde::{Deserialize, Serialize};\n\nuse super::metadata::{BackendKind, RunSliceMetadat"
  },
  {
    "path": "crates/dsperse/src/schema/metadata.rs",
    "chars": 7702,
    "preview": "use std::collections::HashMap;\n\nuse serde::{Deserialize, Serialize};\n\nuse super::tiling::{ChannelSplitInfo, DimSplitInfo"
  },
  {
    "path": "crates/dsperse/src/schema/mod.rs",
    "chars": 116,
    "preview": "pub mod execution;\npub mod metadata;\npub mod tiling;\n\npub use execution::*;\npub use metadata::*;\npub use tiling::*;\n"
  },
  {
    "path": "crates/dsperse/src/schema/tiling.rs",
    "chars": 6848,
    "preview": "use serde::{self, Deserialize, Deserializer, Serialize};\n\n#[derive(Debug, Clone)]\npub enum SplitStrategy<'a> {\n    Tiled"
  },
  {
    "path": "crates/dsperse/src/slicer/analyzer.rs",
    "chars": 28469,
    "preview": "use std::collections::{HashMap, HashSet};\nuse std::path::Path;\n\nuse serde::{Deserialize, Serialize};\n\nuse super::onnx_pr"
  },
  {
    "path": "crates/dsperse/src/slicer/autotiler.rs",
    "chars": 120300,
    "preview": "use std::collections::{HashMap, HashSet};\nuse std::path::Path;\n\nuse super::onnx_proto::{self, GraphProto, ModelProto, No"
  },
  {
    "path": "crates/dsperse/src/slicer/combiner.rs",
    "chars": 13342,
    "preview": "use std::collections::{HashMap, HashSet};\nuse std::path::{Path, PathBuf};\n\nuse super::onnx_proto::{self, ModelProto, Ten"
  },
  {
    "path": "crates/dsperse/src/slicer/layernorm_fuse.rs",
    "chars": 17363,
    "preview": "use std::collections::{HashMap, HashSet};\n\nuse super::onnx_proto::{\n    AttributeProto, ModelProto, NodeProto, TensorPro"
  },
  {
    "path": "crates/dsperse/src/slicer/materializer.rs",
    "chars": 30289,
    "preview": "use std::collections::{HashMap, HashSet};\nuse std::path::{Path, PathBuf};\n\nuse super::autotiler::{self, ChannelSplitPara"
  },
  {
    "path": "crates/dsperse/src/slicer/mod.rs",
    "chars": 4942,
    "preview": "pub mod analyzer;\npub mod autotiler;\npub mod combiner;\npub(crate) mod layernorm_fuse;\npub mod materializer;\npub(crate) m"
  },
  {
    "path": "crates/dsperse/src/slicer/onnx_fold.rs",
    "chars": 82536,
    "preview": "use std::collections::{HashMap, HashSet};\n\nuse super::onnx_proto::{\n    GraphProto, ModelProto, NodeProto, TensorProto, "
  },
  {
    "path": "crates/dsperse/src/slicer/onnx_proto.rs",
    "chars": 30971,
    "preview": "#[allow(clippy::doc_overindented_list_items)]\npub mod onnx {\n    include!(concat!(env!(\"OUT_DIR\"), \"/onnx.rs\"));\n}\n\nuse "
  },
  {
    "path": "crates/dsperse/src/slicer/onnx_shapes.rs",
    "chars": 7741,
    "preview": "use super::onnx_proto::{ModelProto, ValueInfoProto, onnx};\n\npub fn shape_from_value_info(vi: &ValueInfoProto) -> Option<"
  },
  {
    "path": "crates/dsperse/src/slicer/onnx_slicer.rs",
    "chars": 46669,
    "preview": "use std::collections::{HashMap, HashSet};\nuse std::path::Path;\n\nuse super::analyzer::{self, AnalysisResult, NodeAnalysis"
  },
  {
    "path": "crates/dsperse/src/slicer/self_div_rewrite.rs",
    "chars": 947,
    "preview": "use std::collections::HashMap;\n\nuse super::onnx_proto::{ModelProto, TensorProto};\n\n/// Graph rewrite placeholder: detect"
  },
  {
    "path": "crates/dsperse/src/slicer/trace.rs",
    "chars": 20823,
    "preview": "use std::collections::{HashMap, HashSet};\nuse std::path::Path;\n\nuse super::onnx_proto::ModelProto;\nuse crate::error::{Ds"
  },
  {
    "path": "crates/dsperse/src/utils/io.rs",
    "chars": 6714,
    "preview": "use std::collections::HashMap;\nuse std::path::Path;\n\nuse ndarray::{ArrayD, Axis, IxDyn};\nuse rmpv::Value;\n\nuse crate::er"
  },
  {
    "path": "crates/dsperse/src/utils/limits.rs",
    "chars": 2926,
    "preview": "use std::io::Read;\nuse std::path::Path;\n\nuse crate::error::{DsperseError, Result};\n\npub fn reject_symlink(path: &Path) -"
  },
  {
    "path": "crates/dsperse/src/utils/metadata.rs",
    "chars": 598,
    "preview": "use std::path::Path;\n\nuse crate::error::{DsperseError, Result};\nuse crate::schema::RunMetadata;\n\npub fn load_run_metadat"
  },
  {
    "path": "crates/dsperse/src/utils/mod.rs",
    "chars": 61,
    "preview": "pub mod io;\npub mod limits;\npub mod metadata;\npub mod paths;\n"
  },
  {
    "path": "crates/dsperse/src/utils/paths.rs",
    "chars": 3131,
    "preview": "use std::path::{Component, Path, PathBuf};\n\nuse crate::error::{DsperseError, Result};\n\npub const METADATA_FILE: &str = \""
  },
  {
    "path": "crates/dsperse/src/version.rs",
    "chars": 642,
    "preview": "use serde::{Deserialize, Serialize};\n\n#[derive(Debug, Clone, Serialize, Deserialize)]\npub struct DsperseVersion {\n    pu"
  },
  {
    "path": "crates/dsperse/tests/integration_slice.rs",
    "chars": 7594,
    "preview": "use std::path::Path;\n\nuse dsperse::schema::metadata::ModelMetadata;\n\nfn test_models_dir() -> &'static Path {\n    Path::n"
  },
  {
    "path": "crates/dsperse/tests/schema_roundtrip.rs",
    "chars": 11781,
    "preview": "use std::path::Path;\n\nuse dsperse::schema::*;\n\n#[test]\nfn model_metadata_roundtrip() {\n    let json = r#\"{\n        \"orig"
  },
  {
    "path": "crates/dsperse/tests/sn2_contract.rs",
    "chars": 5536,
    "preview": "use std::path::Path;\n\nuse ndarray::{ArrayD, IxDyn};\nuse rmpv::Value;\n\nfn make_value_array(vals: &[f64]) -> Value {\n    V"
  },
  {
    "path": "deny.toml",
    "chars": 258,
    "preview": "[graph]\ntargets = []\nall-features = false\n\n[advisories]\nyanked = \"warn\"\n\n[bans]\nmultiple-versions = \"warn\"\nwildcards = \""
  },
  {
    "path": "docs/JSTPROVE_BACKEND.md",
    "chars": 2557,
    "preview": "# JSTprove Backend Integration\n\n## Overview\n\nDSperse uses [JSTprove](https://github.com/inference-labs-inc/JSTprove) as "
  },
  {
    "path": "docs/overview.md",
    "chars": 2266,
    "preview": "# DSperse: Distributed zkML\n\n## Overview\n\nDSperse is a proving-system-agnostic intelligent slicer for verifiable AI. It "
  },
  {
    "path": "docs/uv_packaging.md",
    "chars": 1100,
    "preview": "# Developer Guide\n\nThis document provides a guide for developers who contribute to the project.\n\n## Build System\n\nThe pr"
  },
  {
    "path": "pyproject.toml",
    "chars": 500,
    "preview": "[build-system]\nrequires = [\"maturin>=1.0,<2.0\"]\nbuild-backend = \"maturin\"\n\n[project]\nname = \"dsperse\"\nversion = \"0.0.0\"\n"
  },
  {
    "path": "python/dsperse/__init__.py",
    "chars": 279,
    "preview": "from dsperse._native import (\n    slice_model,\n    compile_slices,\n    run_inference,\n    prove_run,\n    verify_run,\n   "
  },
  {
    "path": "python/dsperse/cli.py",
    "chars": 527,
    "preview": "import sys\n\n\ndef main():\n    try:\n        from dsperse._native import cli_main\n    except ImportError:\n        print(\"ds"
  },
  {
    "path": "rust-toolchain.toml",
    "chars": 78,
    "preview": "[toolchain]\nchannel = \"nightly-2026-02-22\"\ncomponents = [\"clippy\", \"rustfmt\"]\n"
  }
]
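
Each entry in the condensed-preview array above carries three fields: `path`, `chars` (file size in characters), and `preview` (a truncated content snippet). As a minimal sketch of working with this structure, the snippet below loads a small inline sample of the array and ranks files by size; the `sample` string is illustrative and only reproduces a few entries from the listing above, not the full extraction.

```python
import json

# Illustrative subset of the condensed-preview array; the `chars` values
# match the entries shown in the listing above.
sample = """
[
  {"path": "crates/dsperse/src/slicer/autotiler.rs", "chars": 120300,
   "preview": "use std::collections::{HashMap, HashSet};"},
  {"path": "crates/dsperse/src/slicer/onnx_fold.rs", "chars": 82536,
   "preview": "use std::collections::{HashMap, HashSet};"},
  {"path": ".cargo/config.toml", "chars": 32,
   "preview": "[net]"}
]
"""

entries = json.loads(sample)

# Rank files by character count to find the heaviest source file.
largest = max(entries, key=lambda e: e["chars"])
print(largest["path"], largest["chars"])
# → crates/dsperse/src/slicer/autotiler.rs 120300
```

The same pattern works on the full downloaded .json file by replacing the inline string with `json.load(open("preview.json"))` (filename assumed).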

About this extraction

This page contains the full source code of the inference-labs-inc/dsperse GitHub repository, extracted and formatted as plain text for AI agents and large language models (LLMs). The extraction includes 72 files (939.0 KB), approximately 236.0k tokens, and a symbol index with 908 extracted functions, classes, methods, constants, and types. Use this with OpenClaw, Claude, ChatGPT, Cursor, Windsurf, or any other AI tool that accepts text input. You can copy the full output to your clipboard or download it as a .txt file.

Extracted by GitExtract — free GitHub repo to text converter for AI. Built by Nikandr Surkov.