Full Code of jcmgray/cotengra for AI

Repository: jcmgray/cotengra
Branch: main
Commit: e0f164f452e6
Files: 118
Total size: 8.3 MB

Directory structure:
cotengra/

├── .codecov.yml
├── .gitattributes
├── .github/
│   ├── dependabot.yml
│   └── workflows/
│       ├── pypi-release.yml
│       └── test.yml
├── .gitignore
├── .readthedocs.yml
├── LICENSE.md
├── MANIFEST.in
├── README.md
├── cotengra/
│   ├── __init__.py
│   ├── contract.py
│   ├── core.py
│   ├── core_multi.py
│   ├── experimental/
│   │   ├── __init__.py
│   │   ├── hyper_de.py
│   │   ├── hyper_pe.py
│   │   ├── hyper_pymoo.py
│   │   ├── hyper_scipy.py
│   │   ├── hyper_smac.py
│   │   ├── multi.ipynb
│   │   ├── path_compressed_branchbound.py
│   │   ├── path_compressed_mcts.py
│   │   └── scoring.py
│   ├── hypergraph.py
│   ├── hyperoptimizers/
│   │   ├── __init__.py
│   │   ├── _param_mapping.py
│   │   ├── hyper.py
│   │   ├── hyper_cmaes.py
│   │   ├── hyper_es.py
│   │   ├── hyper_neldermead.py
│   │   ├── hyper_nevergrad.py
│   │   ├── hyper_optuna.py
│   │   ├── hyper_random.py
│   │   ├── hyper_sbplx.py
│   │   └── hyper_skopt.py
│   ├── interface.py
│   ├── nodeops.py
│   ├── oe.py
│   ├── parallel.py
│   ├── pathfinders/
│   │   ├── __init__.py
│   │   ├── kahypar_profiles/
│   │   │   ├── cut_kKaHyPar_sea20.ini
│   │   │   ├── cut_rKaHyPar_sea20.ini
│   │   │   ├── km1_kKaHyPar_sea20.ini
│   │   │   ├── km1_rKaHyPar_sea20.ini
│   │   │   └── old/
│   │   │       ├── cut_kKaHyPar_sea20.ini
│   │   │       ├── cut_rKaHyPar_sea20.ini
│   │   │       ├── km1_kKaHyPar_sea20.ini
│   │   │       └── km1_rKaHyPar_sea20.ini
│   │   ├── path_basic.py
│   │   ├── path_compressed.py
│   │   ├── path_compressed_greedy.py
│   │   ├── path_edgesort.py
│   │   ├── path_flowcutter.py
│   │   ├── path_greedy.py
│   │   ├── path_igraph.py
│   │   ├── path_kahypar.py
│   │   ├── path_labels.py
│   │   ├── path_quickbb.py
│   │   ├── path_random.py
│   │   ├── path_simulated_annealing.py
│   │   └── treedecomp.py
│   ├── plot.py
│   ├── presets.py
│   ├── reusable.py
│   ├── schematic.py
│   ├── scoring.py
│   ├── slicer.py
│   └── utils.py
├── docs/
│   ├── Makefile
│   ├── _pygments/
│   │   ├── _pygments_dark.py
│   │   └── _pygments_light.py
│   ├── _static/
│   │   └── my-styles.css
│   ├── advanced.ipynb
│   ├── basics.ipynb
│   ├── changelog.md
│   ├── conf.py
│   ├── contraction.ipynb
│   ├── examples/
│   │   ├── ex_compressed_contraction.ipynb
│   │   ├── ex_large_output_lazy.ipynb
│   │   └── ex_trace_contraction_to_matmuls.ipynb
│   ├── high-level-interface.ipynb
│   ├── index.md
│   ├── index_examples.md
│   ├── installation.md
│   ├── make.bat
│   ├── trees.ipynb
│   └── visualization.ipynb
├── examples/
│   ├── Example - Reproducing 2005.06787.ipynb
│   ├── Example - Reproducing 2103-03074.ipynb
│   ├── Quantum Circuit Example Old.ipynb
│   ├── Quantum Circuit Example.ipynb
│   ├── benchmarks/
│   │   ├── cubic_6x6x10.json
│   │   ├── mps_mpo_L100_chi64_D5.json
│   │   ├── peps_cluster_r2_D10_a.json
│   │   ├── qucirc_rrzz_n56_d13.json
│   │   ├── rand_50_5_a.json
│   │   ├── randreg_200_3_a.json
│   │   ├── rtree_100_a.json
│   │   └── sycamore_n53_m20_s0_e0_pABCDCDAB.json
│   ├── circuit_n53_m10_s0_e0_pABCDCDAB.qsim
│   ├── circuit_n53_m12_s0_e0_pABCDCDAB.qsim
│   ├── circuit_n53_m20_s0_e0_pABCDCDAB.qsim
│   ├── ex_jax.py
│   ├── ex_mpi_executor.py
│   └── ex_mpi_spmd.py
├── pyproject.toml
└── tests/
    ├── __init__.py
    ├── test_backends.py
    ├── test_compressed.py
    ├── test_compute.py
    ├── test_hypergraph.py
    ├── test_interface.py
    ├── test_optimizers.py
    ├── test_parallel.py
    ├── test_paths_basic.py
    ├── test_slicer.py
    └── test_tree.py

================================================
FILE CONTENTS
================================================

================================================
FILE: .codecov.yml
================================================
codecov:
  require_ci_to_pass: yes

coverage:
  range: 50..100
  status:
    project:
      default:
        informational: true
    patch:
      default:
        informational: true
    changes: false

comment: off


================================================
FILE: .gitattributes
================================================
# Auto detect text files and perform LF normalization
* text=auto

# Standard to msysgit
*.doc	 diff=astextplain
*.DOC	 diff=astextplain
*.docx diff=astextplain
*.DOCX diff=astextplain
*.dot  diff=astextplain
*.DOT  diff=astextplain
*.pdf  diff=astextplain
*.PDF	 diff=astextplain
*.rtf	 diff=astextplain
*.RTF	 diff=astextplain

# include the version number in git archive
cotengra/_version.py export-subst

# make cotengra appear as a python project on github
*.ipynb linguist-language=Python

# SCM syntax highlighting & preventing 3-way merges
pixi.lock merge=binary linguist-language=YAML linguist-generated=true


================================================
FILE: .github/dependabot.yml
================================================
# To get started with Dependabot version updates, you'll need to specify which
# package ecosystems to update and where the package manifests are located.
# Please see the documentation for all configuration options:
# https://docs.github.com/code-security/dependabot/dependabot-version-updates/configuration-options-for-the-dependabot.yml-file

version: 2
updates:
  # Enable Dependabot for GitHub Actions
  - package-ecosystem: "github-actions"
    directory: "/"  # Location of your workflows, typically the root folder
    schedule:
      interval: "daily"  # Frequency of update checks (daily, weekly, or monthly)


================================================
FILE: .github/workflows/pypi-release.yml
================================================
name: Build and Upload cotengra to PyPI
on:
  release:
    types:
      - published
  push:
    tags:
      - 'v*'

jobs:
  build-artifacts:
    runs-on: ubuntu-latest
    if: github.repository == 'jcmgray/cotengra'
    steps:
      - uses: actions/checkout@v6
        with:
          fetch-depth: 0
      - uses: actions/setup-python@v6
        name: Install Python
        with:
          python-version: "3.12"

      - name: Install dependencies
        run: |
          python -m pip install --upgrade pip
          python -m pip install build twine

      - name: Build tarball and wheels
        run: |
          git clean -xdf
          git restore -SW .
          python -m build

      - name: Check built artifacts
        run: |
          python -m twine check --strict dist/*
          pwd
          if [ -f dist/cotengra-0.0.0.tar.gz ]; then
            echo "❌ INVALID VERSION NUMBER"
            exit 1
          else
            echo "✅ Looks good"
          fi
      - uses: actions/upload-artifact@v7
        with:
          name: releases
          path: dist

  test-built-dist:
    needs: build-artifacts
    runs-on: ubuntu-latest
    steps:
      - uses: actions/setup-python@v6
        name: Install Python
        with:
          python-version: "3.12"
      - uses: actions/download-artifact@v8
        with:
          name: releases
          path: dist
      - name: List contents of built dist
        run: |
          ls -ltrh
          ls -ltrh dist

      - name: Verify the built dist/wheel is valid
        if: github.event_name == 'push'
        run: |
          python -m pip install --upgrade pip
          python -m pip install dist/cotengra*.whl

  upload-to-test-pypi:
    needs: test-built-dist
    if: github.event_name == 'push'
    runs-on: ubuntu-latest

    environment:
      name: pypi
      url: https://test.pypi.org/p/cotengra
    permissions:
      id-token: write

    steps:
      - uses: actions/download-artifact@v8
        with:
          name: releases
          path: dist
      - name: Publish package to TestPyPI
        if: github.event_name == 'push'
        uses: pypa/gh-action-pypi-publish@v1.14.0
        with:
          repository-url: https://test.pypi.org/legacy/
          verbose: true


  upload-to-pypi:
    needs: test-built-dist
    if: github.event_name == 'release'
    runs-on: ubuntu-latest

    environment:
      name: pypi
      url: https://pypi.org/p/cotengra
    permissions:
      id-token: write

    steps:
      - uses: actions/download-artifact@v8
        with:
          name: releases
          path: dist
      - name: Publish package to PyPI
        uses: pypa/gh-action-pypi-publish@v1.14.0
        with:
          verbose: true

================================================
FILE: .github/workflows/test.yml
================================================
name: Tests

on:
  workflow_dispatch:
  push:
  pull_request:

defaults:
  run:
    shell: bash -l {0}

jobs:
  run-tests:
    runs-on: ${{ matrix.os }}
    strategy:
      matrix:
        include:
        - os: ubuntu-latest
          pixi_environment: testminimal
          pixi_task: test

        - os: ubuntu-latest
          pixi_environment: testpyold
          pixi_task: test

        - os: ubuntu-latest
          pixi_environment: testpynew
          pixi_task: test

        - os: macos-latest
          pixi_environment: testpymid
          pixi_task: test

        - os: windows-latest
          pixi_environment: testpymid
          pixi_task: test

        - os: ubuntu-latest
          pixi_environment: testtorch
          pixi_task: test-backends

        - os: ubuntu-latest
          pixi_environment: testjax
          pixi_task: test-backends

        - os: ubuntu-latest
          pixi_environment: testtensorflow
          pixi_task: test-backends

    steps:
    - uses: actions/checkout@v6

    - uses: prefix-dev/setup-pixi@v0.9.5
      with:
        environments: ${{ matrix.pixi_environment }}

    - name: Test with pytest
      run: pixi run -e ${{ matrix.pixi_environment }} ${{ matrix.pixi_task }}

    - name: Report to codecov
      uses: codecov/codecov-action@v6
      with:
        token: ${{ secrets.CODECOV_TOKEN }}


================================================
FILE: .gitignore
================================================
# Byte-compiled / optimized / DLL files
__pycache__/
*.py[cod]
*$py.class

# C extensions
*.so

# Distribution / packaging
.Python
env/
build/
develop-eggs/
dist/
downloads/
eggs/
.eggs/
lib/
lib64/
parts/
sdist/
var/
*.egg-info/
.installed.cfg
*.egg

# PyInstaller
#  Usually these files are written by a python script from a template
#  before PyInstaller builds the exe, so as to inject date/other infos into it.
*.manifest
*.spec

# Installer logs
pip-log.txt
pip-delete-this-directory.txt

# Unit test / coverage reports
htmlcov/
.tox/
.coverage
.coverage.*
.cache
nosetests.xml
coverage.xml
*,cover

# Translations
*.mo
*.pot

# Django stuff:
*.log

# Sphinx documentation
docs/_build/

# PyBuilder
target/

.pytest_cache
.vscode

**/.ipynb_checkpoints/

# Added by cargo
/target
Cargo.lock

cotengra/_version.py

experiments

# pixi environments
.pixi/*
!.pixi/config.toml

================================================
FILE: .readthedocs.yml
================================================
version: 2

sphinx:
  configuration: docs/conf.py

build:
  os: "ubuntu-24.04"
  tools:
    python: "latest"
  jobs:
      create_environment:
         - asdf plugin add pixi
         - asdf install pixi latest
         - asdf global pixi latest
      install:
         - pixi install -e docs
      build:
         html:
            - pixi run readthedocs

formats: []


================================================
FILE: LICENSE.md
================================================

                                 Apache License
                           Version 2.0, January 2004
                        http://www.apache.org/licenses/

   TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION

   1. Definitions.

      "License" shall mean the terms and conditions for use, reproduction,
      and distribution as defined by Sections 1 through 9 of this document.

      "Licensor" shall mean the copyright owner or entity authorized by
      the copyright owner that is granting the License.

      "Legal Entity" shall mean the union of the acting entity and all
      other entities that control, are controlled by, or are under common
      control with that entity. For the purposes of this definition,
      "control" means (i) the power, direct or indirect, to cause the
      direction or management of such entity, whether by contract or
      otherwise, or (ii) ownership of fifty percent (50%) or more of the
      outstanding shares, or (iii) beneficial ownership of such entity.

      "You" (or "Your") shall mean an individual or Legal Entity
      exercising permissions granted by this License.

      "Source" form shall mean the preferred form for making modifications,
      including but not limited to software source code, documentation
      source, and configuration files.

      "Object" form shall mean any form resulting from mechanical
      transformation or translation of a Source form, including but
      not limited to compiled object code, generated documentation,
      and conversions to other media types.

      "Work" shall mean the work of authorship, whether in Source or
      Object form, made available under the License, as indicated by a
      copyright notice that is included in or attached to the work
      (an example is provided in the Appendix below).

      "Derivative Works" shall mean any work, whether in Source or Object
      form, that is based on (or derived from) the Work and for which the
      editorial revisions, annotations, elaborations, or other modifications
      represent, as a whole, an original work of authorship. For the purposes
      of this License, Derivative Works shall not include works that remain
      separable from, or merely link (or bind by name) to the interfaces of,
      the Work and Derivative Works thereof.

      "Contribution" shall mean any work of authorship, including
      the original version of the Work and any modifications or additions
      to that Work or Derivative Works thereof, that is intentionally
      submitted to Licensor for inclusion in the Work by the copyright owner
      or by an individual or Legal Entity authorized to submit on behalf of
      the copyright owner. For the purposes of this definition, "submitted"
      means any form of electronic, verbal, or written communication sent
      to the Licensor or its representatives, including but not limited to
      communication on electronic mailing lists, source code control systems,
      and issue tracking systems that are managed by, or on behalf of, the
      Licensor for the purpose of discussing and improving the Work, but
      excluding communication that is conspicuously marked or otherwise
      designated in writing by the copyright owner as "Not a Contribution."

      "Contributor" shall mean Licensor and any individual or Legal Entity
      on behalf of whom a Contribution has been received by Licensor and
      subsequently incorporated within the Work.

   2. Grant of Copyright License. Subject to the terms and conditions of
      this License, each Contributor hereby grants to You a perpetual,
      worldwide, non-exclusive, no-charge, royalty-free, irrevocable
      copyright license to reproduce, prepare Derivative Works of,
      publicly display, publicly perform, sublicense, and distribute the
      Work and such Derivative Works in Source or Object form.

   3. Grant of Patent License. Subject to the terms and conditions of
      this License, each Contributor hereby grants to You a perpetual,
      worldwide, non-exclusive, no-charge, royalty-free, irrevocable
      (except as stated in this section) patent license to make, have made,
      use, offer to sell, sell, import, and otherwise transfer the Work,
      where such license applies only to those patent claims licensable
      by such Contributor that are necessarily infringed by their
      Contribution(s) alone or by combination of their Contribution(s)
      with the Work to which such Contribution(s) was submitted. If You
      institute patent litigation against any entity (including a
      cross-claim or counterclaim in a lawsuit) alleging that the Work
      or a Contribution incorporated within the Work constitutes direct
      or contributory patent infringement, then any patent licenses
      granted to You under this License for that Work shall terminate
      as of the date such litigation is filed.

   4. Redistribution. You may reproduce and distribute copies of the
      Work or Derivative Works thereof in any medium, with or without
      modifications, and in Source or Object form, provided that You
      meet the following conditions:

      (a) You must give any other recipients of the Work or
          Derivative Works a copy of this License; and

      (b) You must cause any modified files to carry prominent notices
          stating that You changed the files; and

      (c) You must retain, in the Source form of any Derivative Works
          that You distribute, all copyright, patent, trademark, and
          attribution notices from the Source form of the Work,
          excluding those notices that do not pertain to any part of
          the Derivative Works; and

      (d) If the Work includes a "NOTICE" text file as part of its
          distribution, then any Derivative Works that You distribute must
          include a readable copy of the attribution notices contained
          within such NOTICE file, excluding those notices that do not
          pertain to any part of the Derivative Works, in at least one
          of the following places: within a NOTICE text file distributed
          as part of the Derivative Works; within the Source form or
          documentation, if provided along with the Derivative Works; or,
          within a display generated by the Derivative Works, if and
          wherever such third-party notices normally appear. The contents
          of the NOTICE file are for informational purposes only and
          do not modify the License. You may add Your own attribution
          notices within Derivative Works that You distribute, alongside
          or as an addendum to the NOTICE text from the Work, provided
          that such additional attribution notices cannot be construed
          as modifying the License.

      You may add Your own copyright statement to Your modifications and
      may provide additional or different license terms and conditions
      for use, reproduction, or distribution of Your modifications, or
      for any such Derivative Works as a whole, provided Your use,
      reproduction, and distribution of the Work otherwise complies with
      the conditions stated in this License.

   5. Submission of Contributions. Unless You explicitly state otherwise,
      any Contribution intentionally submitted for inclusion in the Work
      by You to the Licensor shall be under the terms and conditions of
      this License, without any additional terms or conditions.
      Notwithstanding the above, nothing herein shall supersede or modify
      the terms of any separate license agreement you may have executed
      with Licensor regarding such Contributions.

   6. Trademarks. This License does not grant permission to use the trade
      names, trademarks, service marks, or product names of the Licensor,
      except as required for reasonable and customary use in describing the
      origin of the Work and reproducing the content of the NOTICE file.

   7. Disclaimer of Warranty. Unless required by applicable law or
      agreed to in writing, Licensor provides the Work (and each
      Contributor provides its Contributions) on an "AS IS" BASIS,
      WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
      implied, including, without limitation, any warranties or conditions
      of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
      PARTICULAR PURPOSE. You are solely responsible for determining the
      appropriateness of using or redistributing the Work and assume any
      risks associated with Your exercise of permissions under this License.

   8. Limitation of Liability. In no event and under no legal theory,
      whether in tort (including negligence), contract, or otherwise,
      unless required by applicable law (such as deliberate and grossly
      negligent acts) or agreed to in writing, shall any Contributor be
      liable to You for damages, including any direct, indirect, special,
      incidental, or consequential damages of any character arising as a
      result of this License or out of the use or inability to use the
      Work (including but not limited to damages for loss of goodwill,
      work stoppage, computer failure or malfunction, or any and all
      other commercial damages or losses), even if such Contributor
      has been advised of the possibility of such damages.

   9. Accepting Warranty or Additional Liability. While redistributing
      the Work or Derivative Works thereof, You may choose to offer,
      and charge a fee for, acceptance of support, warranty, indemnity,
      or other liability obligations and/or rights consistent with this
      License. However, in accepting such obligations, You may act only
      on Your own behalf and on Your sole responsibility, not on behalf
      of any other Contributor, and only if You agree to indemnify,
      defend, and hold each Contributor harmless for any liability
      incurred by, or claims asserted against, such Contributor by reason
      of your accepting any such warranty or additional liability.

   END OF TERMS AND CONDITIONS

   APPENDIX: How to apply the Apache License to your work.

      To apply the Apache License to your work, attach the following
      boilerplate notice, with the fields enclosed by brackets "{}"
      replaced with your own identifying information. (Don't include
      the brackets!)  The text should be enclosed in the appropriate
      comment syntax for the file format. We also recommend that a
      file or class name and description of purpose be included on the
      same "printed page" as the copyright notice for easier
      identification within third-party archives.

   Copyright {yyyy} {name of copyright owner}

   Licensed under the Apache License, Version 2.0 (the "License");
   you may not use this file except in compliance with the License.
   You may obtain a copy of the License at

       http://www.apache.org/licenses/LICENSE-2.0

   Unless required by applicable law or agreed to in writing, software
   distributed under the License is distributed on an "AS IS" BASIS,
   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
   See the License for the specific language governing permissions and
   limitations under the License.


================================================
FILE: MANIFEST.in
================================================
include cotengra/pathfinders/kahypar_profiles/*.ini
include cotengra/pathfinders/kahypar_profiles/old/*.ini
include LICENSE.md
include README.md
graft tests


================================================
FILE: README.md
================================================
<p align="left"><img src="https://imgur.com/OM5XyaD.png" alt="cotengra" width="400px"></p>

[![tests](https://github.com/jcmgray/cotengra/actions/workflows/test.yml/badge.svg)](https://github.com/jcmgray/cotengra/actions/workflows/test.yml)
[![codecov](https://codecov.io/gh/jcmgray/cotengra/branch/main/graph/badge.svg?token=Q5evNiuT9S)](https://codecov.io/gh/jcmgray/cotengra)
[![Docs](https://readthedocs.org/projects/cotengra/badge/?version=latest)](https://cotengra.readthedocs.io)
[![PyPI](https://img.shields.io/pypi/v/cotengra?color=teal)](https://pypi.org/project/cotengra/)
[![Anaconda-Server Badge](https://anaconda.org/conda-forge/cotengra/badges/version.svg)](https://anaconda.org/conda-forge/cotengra)
[![Pixi Badge](https://img.shields.io/endpoint?url=https://raw.githubusercontent.com/prefix-dev/pixi/main/assets/badge/v0.json)](https://pixi.sh)

`cotengra` is a python library for contracting tensor networks or einsum
expressions involving large numbers of tensors - the main docs can be found
at [cotengra.readthedocs.io](https://cotengra.readthedocs.io/).
Some of the key features of `cotengra` include:

* drop-in ``einsum`` and ``ncon`` replacement
* an explicit **contraction tree** object that can be flexibly built, modified and visualized
* a **'hyper optimizer'** that samples trees while tuning the generating meta-parameters
* **dynamic slicing** for massive memory savings and parallelism
* **simulated annealing** as an alternative optimization and slicing strategy
* support for **hyper** edge tensor networks and thus arbitrary einsum equations
* **paths** that can be supplied to [`numpy.einsum`](https://numpy.org/doc/stable/reference/generated/numpy.einsum.html), [`opt_einsum`](https://dgasmith.github.io/opt_einsum/), [`quimb`](https://quimb.readthedocs.io/en/latest/) among others
* **performing contractions** with tensors from many libraries via [`autoray`](https://github.com/jcmgray/autoray),
  even if they don't provide `einsum` or `tensordot` but do have (batch) matrix
  multiplication
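As a minimal illustration of what such a contraction "path" is, the sketch below uses only `numpy` (not `cotengra` itself) to compute and then consume a path; cotengra's optimizers emit paths in this same `opt_einsum`-style format, which is why they can be fed to `numpy.einsum`, `opt_einsum`, `quimb`, etc.:

```python
import numpy as np

# three small tensors forming a chain contraction: ij,jk,kl->il
a = np.random.rand(8, 32)
b = np.random.rand(32, 4)
c = np.random.rand(4, 16)

# a "path" is just a pairwise order of contractions, e.g.
# ['einsum_path', (0, 1), (0, 1)] - contract tensors 0 and 1 first,
# then contract the intermediate with the remaining tensor
path, info = np.einsum_path("ij,jk,kl->il", a, b, c, optimize="optimal")

# the path can then be passed back to einsum (or any compatible library)
result = np.einsum("ij,jk,kl->il", a, b, c, optimize=path)
```

For networks of this size an exhaustive `"optimal"` search is fine; cotengra's value is in producing high-quality paths for the large networks where exhaustive search is impossible.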

<p align="center"><img src="https://imgur.com/jMO138y.png" alt="cotengra" width="500px"></p>


================================================
FILE: cotengra/__init__.py
================================================
"""Hyper optimized contraction trees for large tensor networks and einsums."""

from importlib.metadata import PackageNotFoundError as _PackageNotFoundError
from importlib.metadata import version as _version

try:
    __version__ = _version("cotengra")
except _PackageNotFoundError:
    try:
        # fallback for source trees where hatch-vcs has generated _version.py.
        from ._version import version as __version__
    except ImportError:
        __version__ = "0.0.0+unknown"

import functools
import warnings

from . import utils
from .core import (
    ContractionTree,
    ContractionTreeCompressed,
)
from .core_multi import (
    ContractionTreeMulti,
)
from .hypergraph import (
    HyperGraph,
    get_hypergraph,
)
from .hyperoptimizers import (
    hyper_cmaes,
    hyper_es,
    hyper_neldermead,
    hyper_nevergrad,
    hyper_optuna,
    hyper_random,
    hyper_sbplx,
    hyper_skopt,
)
from .hyperoptimizers.hyper import (
    HyperCompressedOptimizer,
    HyperMultiOptimizer,
    HyperOptimizer,
    ReusableHyperCompressedOptimizer,
    ReusableHyperOptimizer,
    get_hyper_space,
    list_hyper_functions,
)
from .interface import (
    array_contract,
    array_contract_expression,
    array_contract_path,
    array_contract_tree,
    einsum,
    einsum_expression,
    einsum_tree,
    ncon,
    register_preset,
)
from .oe import PathOptimizer
from .pathfinders import (
    path_basic,
    path_compressed_greedy,
    path_greedy,
    path_igraph,
    path_kahypar,
    path_labels,
)
from .pathfinders.path_basic import (
    GreedyOptimizer,
    OptimalOptimizer,
    RandomGreedyOptimizer,
    ReusableRandomGreedyOptimizer,
    edge_path_to_linear,
    edge_path_to_ssa,
    linear_to_ssa,
    ssa_to_linear,
)
from .pathfinders.path_flowcutter import (
    FlowCutterOptimizer,
    optimize_flowcutter,
)
from .pathfinders.path_quickbb import QuickBBOptimizer, optimize_quickbb
from .pathfinders.path_random import RandomOptimizer
from .plot import (
    plot_contractions,
    plot_contractions_alt,
    plot_scatter,
    plot_scatter_alt,
    plot_slicings,
    plot_slicings_alt,
    plot_tree,
    plot_tree_ring,
    plot_tree_span,
    plot_tree_tent,
    plot_trials,
    plot_trials_alt,
)
from .presets import (
    AutoHQOptimizer,
    AutoOptimizer,
    auto_hq_optimize,
    auto_optimize,
    greedy_optimize,
    optimal_optimize,
    optimal_outer_optimize,
)
from .reusable import (
    hash_contraction,
)
from .slicer import SliceFinder
from .utils import (
    get_symbol,
    get_symbol_map,
)

UniformOptimizer = functools.partial(HyperOptimizer, optlib="random")
"""Performs no Gaussian process tuning by default, just random sampling -
requires no external optimization library.
"""

contract_expression = einsum_expression
"""Alias for :func:`cotengra.einsum_expression`."""

contract = einsum
"""Alias for :func:`cotengra.einsum`."""


__all__ = (
    "array_contract_expression",
    "array_contract_path",
    "array_contract_tree",
    "array_contract",
    "auto_hq_optimize",
    "auto_optimize",
    "AutoHQOptimizer",
    "AutoOptimizer",
    "contract_expression",
    "contract",
    "ContractionTree",
    "ContractionTreeCompressed",
    "ContractionTreeMulti",
    "edge_path_to_linear",
    "edge_path_to_ssa",
    "einsum_expression",
    "einsum_tree",
    "einsum",
    "FlowCutterOptimizer",
    "get_hyper_space",
    "get_hypergraph",
    "get_symbol_map",
    "get_symbol",
    "greedy_optimize",
    "GreedyOptimizer",
    "hash_contraction",
    "hyper_cmaes",
    "hyper_nevergrad",
    "hyper_neldermead",
    "hyper_optimize",
    "hyper_optuna",
    "hyper_random",
    "hyper_es",
    "hyper_skopt",
    "hyper_sbplx",
    "HyperCompressedOptimizer",
    "HyperGraph",
    "HyperMultiOptimizer",
    "HyperOptimizer",
    "linear_to_ssa",
    "list_hyper_functions",
    "ncon",
    "optimal_optimize",
    "optimal_outer_optimize",
    "OptimalOptimizer",
    "optimize_flowcutter",
    "optimize_quickbb",
    "path_basic",
    "path_compressed_greedy",
    "path_greedy",
    "path_igraph",
    "path_kahypar",
    "path_labels",
    "PathOptimizer",
    "plot_contractions_alt",
    "plot_contractions",
    "plot_scatter_alt",
    "plot_scatter",
    "plot_slicings_alt",
    "plot_slicings",
    "plot_tree_ring",
    "plot_tree_span",
    "plot_tree_tent",
    "plot_tree",
    "plot_trials_alt",
    "plot_trials",
    "QuasiRandOptimizer",
    "QuickBBOptimizer",
    "RandomGreedyOptimizer",
    "RandomOptimizer",
    "register_preset",
    "ReusableHyperCompressedOptimizer",
    "ReusableHyperOptimizer",
    "ReusableRandomGreedyOptimizer",
    "SliceFinder",
    "ssa_to_linear",
    "UniformOptimizer",
    "utils",
)


# add some presets


def hyper_optimize(
    inputs,
    output,
    size_dict,
    memory_limit=None,
    get="path",
    **opts,
):
    if memory_limit is not None:
        warnings.warn(
            "`memory_limit` is not supported in hyper_optimize, ignoring."
        )

    optimizer = HyperOptimizer(**opts)
    if get == "path":
        return optimizer(inputs, output, size_dict)
    elif get == "tree":
        return optimizer.search(inputs, output, size_dict)
    else:
        raise ValueError(f"Unknown get option {get}")


def hyper_compressed_optimize(
    inputs,
    output,
    size_dict,
    get="path",
    **opts,
):
    optimizer = HyperCompressedOptimizer(**opts)

    if get == "path":
        return optimizer(inputs, output, size_dict)
    elif get == "tree":
        return optimizer.search(inputs, output, size_dict)
    else:
        raise ValueError(f"Unknown get option {get}")


def random_greedy_optimize(
    inputs, output, size_dict, memory_limit=None, **opts
):
    if memory_limit is not None:
        warnings.warn(
            "`memory_limit` is not supported in "
            "random_greedy_optimize, ignoring."
        )

    optimizer = RandomGreedyOptimizer(**opts)
    return optimizer(inputs, output, size_dict)


try:
    register_preset(
        "hyper",
        hyper_optimize,
        optimizer_tree=functools.partial(hyper_optimize, get="tree"),
    )
    register_preset(
        "hyper-256",
        functools.partial(hyper_optimize, max_repeats=256),
        optimizer_tree=functools.partial(
            hyper_optimize, max_repeats=256, get="tree"
        ),
    )
    register_preset(
        "hyper-greedy",
        functools.partial(hyper_optimize, methods=["greedy"]),
        optimizer_tree=functools.partial(
            hyper_optimize, methods=["greedy"], get="tree"
        ),
    )
    register_preset(
        "hyper-labels",
        functools.partial(hyper_optimize, methods=["labels"]),
        optimizer_tree=functools.partial(
            hyper_optimize, methods=["labels"], get="tree"
        ),
    )
    register_preset(
        "hyper-kahypar",
        functools.partial(hyper_optimize, methods=["kahypar"]),
        optimizer_tree=functools.partial(
            hyper_optimize, methods=["kahypar"], get="tree"
        ),
    )
    register_preset(
        "hyper-balanced",
        functools.partial(
            hyper_optimize, methods=["kahypar-balanced"], max_repeats=16
        ),
        optimizer_tree=functools.partial(
            hyper_optimize,
            methods=["kahypar-balanced"],
            max_repeats=16,
            get="tree",
        ),
    )
    register_preset(
        "hyper-compressed",
        hyper_compressed_optimize,
        optimizer_tree=functools.partial(
            hyper_compressed_optimize,
            get="tree",
        ),
        compressed=True,
    )
    register_preset(
        "hyper-spinglass",
        functools.partial(hyper_optimize, methods=["spinglass"]),
    )
    register_preset(
        "hyper-betweenness",
        functools.partial(hyper_optimize, methods=["betweenness"]),
    )
    register_preset(
        "random-greedy",
        random_greedy_optimize,
    )
    register_preset(
        "random-greedy-128",
        functools.partial(random_greedy_optimize, max_repeats=128),
    )
    register_preset(
        "flowcutter-2",
        functools.partial(optimize_flowcutter, max_time=2),
    )
    register_preset(
        "flowcutter-10",
        functools.partial(optimize_flowcutter, max_time=10),
    )
    register_preset(
        "flowcutter-60",
        functools.partial(optimize_flowcutter, max_time=60),
    )
    register_preset(
        "quickbb-2",
        functools.partial(optimize_quickbb, max_time=2),
    )
    register_preset(
        "quickbb-10",
        functools.partial(optimize_quickbb, max_time=10),
    )
    register_preset(
        "quickbb-60",
        functools.partial(optimize_quickbb, max_time=60),
    )
    register_preset(
        "greedy-compressed",
        path_compressed_greedy.greedy_compressed,
        path_compressed_greedy.trial_greedy_compressed,
        compressed=True,
    )
    register_preset(
        "greedy-span",
        path_compressed_greedy.greedy_span,
        path_compressed_greedy.trial_greedy_span,
        compressed=True,
    )
except KeyError:
    # KeyError: raised when reloading cotengra, e.g. because the preset
    # entries are already registered
    pass


================================================
FILE: cotengra/contract.py
================================================
"""Functionality relating to actually contracting."""

import contextlib
import functools
import itertools
import operator

from autoray import do, get_namespace, infer_backend_multi, shape

DEFAULT_IMPLEMENTATION = "auto"


def set_default_implementation(impl):
    global DEFAULT_IMPLEMENTATION
    DEFAULT_IMPLEMENTATION = impl


def get_default_implementation():
    return DEFAULT_IMPLEMENTATION


@contextlib.contextmanager
def default_implementation(impl):
    """Context manager for temporarily setting the default implementation."""
    global DEFAULT_IMPLEMENTATION
    old_impl = DEFAULT_IMPLEMENTATION
    DEFAULT_IMPLEMENTATION = impl
    try:
        yield
    finally:
        DEFAULT_IMPLEMENTATION = old_impl


@functools.lru_cache(2**12)
def _sanitize_equation(eq):
    """Get the input and output indices of an equation, computing the output
    implicitly as the sorted sequence of every index that appears exactly once
    if it is not provided.
    """
    # remove spaces
    eq = eq.replace(" ", "")

    if "..." in eq:
        raise NotImplementedError("Ellipsis not supported.")

    if "->" not in eq:
        lhs = eq
        tmp_subscripts = lhs.replace(",", "")
        out = "".join(
            # sorted sequence of indices
            s
            for s in sorted(set(tmp_subscripts))
            # that appear exactly once
            if tmp_subscripts.count(s) == 1
        )
    else:
        lhs, out = eq.split("->")
    return lhs, out
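As a quick standalone illustration of the implicit-output convention applied above (pure Python, independent of the cached helper):

```python
# Minimal sketch of the implicit-output rule: indices that appear exactly
# once across all input terms form the output, in sorted order (the
# standard implicit einsum convention).
def implicit_output(lhs):
    subscripts = lhs.replace(",", "")
    return "".join(
        s for s in sorted(set(subscripts)) if subscripts.count(s) == 1
    )

assert implicit_output("ab,bc") == "ac"  # matmul: 'b' is repeated
assert implicit_output("ab,ab") == ""    # full contraction
assert implicit_output("ba") == "ab"     # output is sorted
```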


@functools.lru_cache(2**12)
def _parse_einsum_single(eq, shape):
    """Cached parsing of a single term einsum equation into the necessary
    sequence of arguments for axes diagonals, sums, and transposes.
    """
    lhs, out = _sanitize_equation(eq)

    # parse each index
    need_to_diag = []
    need_to_sum = []
    seen = set()
    for ix in lhs:
        if ix in need_to_diag:
            continue
        if ix in seen:
            need_to_diag.append(ix)
            continue
        seen.add(ix)
        if ix not in out:
            need_to_sum.append(ix)

    # first handle diagonal reductions
    if need_to_diag:
        diag_sels = []
        sizes = dict(zip(lhs, shape))
        while need_to_diag:
            ixd = need_to_diag.pop()
            dinds = tuple(range(sizes[ixd]))

            # construct advanced indexing object
            selector = tuple(dinds if ix == ixd else slice(None) for ix in lhs)
            diag_sels.append(selector)

            # after taking the diagonal what are new indices?
            ixd_contig = ixd * lhs.count(ixd)
            if ixd_contig in lhs:
                # contig axes, new axis is at same position
                lhs = lhs.replace(ixd_contig, ixd)
            else:
                # non-contig, new axis is at beginning
                lhs = ixd + lhs.replace(ixd, "")
    else:
        diag_sels = None

    # then sum reductions
    if need_to_sum:
        sum_axes = tuple(map(lhs.index, need_to_sum))
        for ix in need_to_sum:
            lhs = lhs.replace(ix, "")
    else:
        sum_axes = None

    # then transposition
    if lhs == out:
        perm = None
    else:
        perm = tuple(lhs.index(ix) for ix in out)

    return diag_sels, sum_axes, perm


def _parse_eq_to_pure_multiplication(a_term, shape_a, b_term, shape_b, out):
    """If there are no contracted indices, then we can directly transpose and
    insert singleton dimensions into ``a`` and ``b`` such that (broadcast)
    elementwise multiplication performs the einsum.

    No need to cache this as it is within the cached
    ``_parse_eq_to_batch_matmul``.

    """
    desired_a = ""
    desired_b = ""
    new_shape_a = []
    new_shape_b = []
    for ix in out:
        if ix in a_term:
            desired_a += ix
            new_shape_a.append(shape_a[a_term.index(ix)])
        else:
            new_shape_a.append(1)
        if ix in b_term:
            desired_b += ix
            new_shape_b.append(shape_b[b_term.index(ix)])
        else:
            new_shape_b.append(1)

    if desired_a != a_term:
        eq_a = f"{a_term}->{desired_a}"
    else:
        eq_a = None
    if desired_b != b_term:
        eq_b = f"{b_term}->{desired_b}"
    else:
        eq_b = None

    return (
        eq_a,
        eq_b,
        new_shape_a,
        new_shape_b,
        None,  # new_shape_ab, not needed since not fusing
        None,  # perm_ab, not needed as we transpose a and b first
        True,  # pure_multiplication=True
    )
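The pure-multiplication case can be checked concretely (numpy only, not cotengra internals): when no index is shared-and-summed, an einsum reduces to a broadcast elementwise multiply after inserting singleton dimensions, which is exactly the preparation done above.

```python
import numpy as np

rng = np.random.default_rng(42)
a = rng.normal(size=(2, 3))   # indices "ab"
b = rng.normal(size=(4,))     # index "c"

# "ab,c->abc" has no contracted indices ...
expected = np.einsum("ab,c->abc", a, b)
# ... so reshaping to broadcast-compatible shapes and multiplying suffices
result = a.reshape(2, 3, 1) * b.reshape(1, 1, 4)

assert result.shape == (2, 3, 4)
assert np.allclose(result, expected)
```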


@functools.lru_cache(2**12)
def _parse_eq_to_batch_matmul(eq, shape_a, shape_b):
    """Cached parsing of a two term einsum equation into the necessary
    sequence of arguments for contraction via batched matrix multiplication.
    The steps we need to specify are:

        1. Remove repeated and trivial indices from the left and right terms,
           and transpose them, done as a single einsum.
        2. Fuse the remaining indices so we have two 3D tensors.
        3. Perform the batched matrix multiplication.
        4. Unfuse the output to get the desired final index order.

    """
    lhs, out = eq.split("->")
    a_term, b_term = lhs.split(",")

    if len(a_term) != len(shape_a):
        raise ValueError(f"Term '{a_term}' does not match shape {shape_a}.")
    if len(b_term) != len(shape_b):
        raise ValueError(f"Term '{b_term}' does not match shape {shape_b}.")

    sizes = {}
    singletons = set()

    # parse left term to unique indices with size > 1
    left = {}
    for ix, d in zip(a_term, shape_a):
        if d == 1:
            # everything (including broadcasting) works nicely if we simply
            # ignore such dimensions, but we do need to track whether they
            # appear in the output and thus should be reintroduced later
            singletons.add(ix)
            continue
        if sizes.setdefault(ix, d) != d:
            # set and check size
            raise ValueError(
                f"Index {ix} has mismatched sizes {sizes[ix]} and {d}."
            )
        left[ix] = True

    # parse right term to unique indices with size > 1
    right = {}
    for ix, d in zip(b_term, shape_b):
        # broadcast indices (size 1 on one input and size != 1
        # on the other) should not be treated as singletons
        if d == 1:
            if ix not in left:
                singletons.add(ix)
            continue
        singletons.discard(ix)

        if sizes.setdefault(ix, d) != d:
            # set and check size
            raise ValueError(
                f"Index {ix} has mismatched sizes {sizes[ix]} and {d}."
            )
        right[ix] = True

    # now we classify the unique size > 1 indices only
    bat_inds = []  # appears on A, B, O
    con_inds = []  # appears on A, B, .
    a_keep = []  # appears on A, ., O
    b_keep = []  # appears on ., B, O
    # other indices (appearing on A or B only) will
    # be summed or traced out prior to the matmul
    for ix in left:
        if right.pop(ix, False):
            if ix in out:
                bat_inds.append(ix)
            else:
                con_inds.append(ix)
        elif ix in out:
            a_keep.append(ix)
    # now only indices unique to right remain
    for ix in right:
        if ix in out:
            b_keep.append(ix)

    if not con_inds:
        # contraction is pure multiplication, prepare inputs differently
        return _parse_eq_to_pure_multiplication(
            a_term, shape_a, b_term, shape_b, out
        )

    # only need the size one indices that appear in the output
    singletons = [ix for ix in out if ix in singletons]

    # take diagonal, remove any trivial axes and transpose left
    desired_a = "".join((*bat_inds, *a_keep, *con_inds))
    if a_term != desired_a:
        if set(a_term) == set(desired_a):
            # only need to transpose, don't invoke einsum
            eq_a = tuple(a_term.index(ix) for ix in desired_a)
        else:
            eq_a = f"{a_term}->{desired_a}"
    else:
        eq_a = None

    # take diagonal, remove any trivial axes and transpose right
    desired_b = "".join((*bat_inds, *con_inds, *b_keep))
    if b_term != desired_b:
        if set(b_term) == set(desired_b):
            # only need to transpose, don't invoke einsum
            eq_b = tuple(b_term.index(ix) for ix in desired_b)
        else:
            eq_b = f"{b_term}->{desired_b}"
    else:
        eq_b = None

    # then we want to reshape
    if bat_inds:
        lgroups = (bat_inds, a_keep, con_inds)
        rgroups = (bat_inds, con_inds, b_keep)
        ogroups = (bat_inds, a_keep, b_keep)
    else:
        # avoid size 1 batch dimension if no batch indices
        lgroups = (a_keep, con_inds)
        rgroups = (con_inds, b_keep)
        ogroups = (a_keep, b_keep)

    if any(len(group) != 1 for group in lgroups):
        # need to fuse 'kept' and contracted indices
        # (though could allow batch indices to be broadcast)
        new_shape_a = tuple(
            functools.reduce(operator.mul, (sizes[ix] for ix in ix_group), 1)
            for ix_group in lgroups
        )
    else:
        new_shape_a = None

    if any(len(group) != 1 for group in rgroups):
        # need to fuse 'kept' and contracted indices
        # (though could allow batch indices to be broadcast)
        new_shape_b = tuple(
            functools.reduce(operator.mul, (sizes[ix] for ix in ix_group), 1)
            for ix_group in rgroups
        )
    else:
        new_shape_b = None

    if any(len(group) != 1 for group in ogroups) or singletons:
        new_shape_ab = (1,) * len(singletons) + tuple(
            sizes[ix] for ix_group in ogroups for ix in ix_group
        )
    else:
        new_shape_ab = None

    # then we might need to permute the matmul produced output:
    out_produced = "".join((*singletons, *bat_inds, *a_keep, *b_keep))
    if out_produced != out:
        perm_ab = tuple(out_produced.index(ix) for ix in out)
    else:
        perm_ab = None

    return (
        eq_a,
        eq_b,
        new_shape_a,
        new_shape_b,
        new_shape_ab,
        perm_ab,
        False,  # pure_multiplication=False
    )


def _einsum_single(eq, x, backend=None):
    """Einsum on a single tensor, via three steps: diagonal selection
    (via advanced indexing), axes summations, transposition. The logic for each
    is cached based on the equation and array shape, and each step is only
    performed if necessary.
    """
    try:
        return do("einsum", eq, x, like=backend)
    except ImportError:
        pass

    diag_sels, sum_axes, perm = _parse_einsum_single(eq, shape(x))

    if diag_sels is not None:
        # diagonal reduction via advanced indexing
        # e.g ababbac->abc
        for selector in diag_sels:
            x = x[selector]

    if sum_axes is not None:
        # trivial removal of axes via summation
        # e.g. abc->c
        x = do("sum", x, sum_axes, like=backend)

    if perm is not None:
        # transpose to desired output
        # e.g. abc->cba
        x = do("transpose", x, perm, like=backend)

    return x
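A worked example (numpy only) of the three steps performed above, for the equation `"aab->ba"`: diagonal selection via advanced indexing, no summation in this case, then transposition.

```python
import numpy as np

x = np.random.default_rng(0).normal(size=(2, 2, 3))

# step 1: take the diagonal over the repeated 'a' index -> shape (2, 3)
d = x[np.arange(2), np.arange(2), :]
# step 2: no summed indices in this example
# step 3: transpose "ab" -> "ba"
result = d.transpose(1, 0)

assert np.allclose(result, np.einsum("aab->ba", x))
```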


def _do_contraction_via_bmm(
    a,
    b,
    eq_a,
    eq_b,
    new_shape_a,
    new_shape_b,
    new_shape_ab,
    perm_ab,
    pure_multiplication,
    backend,
):
    # prepare left
    if eq_a is not None:
        if isinstance(eq_a, tuple):
            # only transpose
            a = do("transpose", a, eq_a, like=backend)
        else:
            # diagonals, sums, and transpose
            a = _einsum_single(eq_a, a)
    if new_shape_a is not None:
        a = do("reshape", a, new_shape_a, like=backend)

    # prepare right
    if eq_b is not None:
        if isinstance(eq_b, tuple):
            # only transpose
            b = do("transpose", b, eq_b, like=backend)
        else:
            # diagonals, sums, and transpose
            b = _einsum_single(eq_b, b)
    if new_shape_b is not None:
        b = do("reshape", b, new_shape_b, like=backend)

    if pure_multiplication:
        # no contracted indices
        return do("multiply", a, b)

    # do the contraction!
    ab = do("matmul", a, b, like=backend)

    # prepare the output
    if new_shape_ab is not None:
        ab = do("reshape", ab, new_shape_ab, like=backend)
    if perm_ab is not None:
        ab = do("transpose", ab, perm_ab, like=backend)

    return ab


def einsum(eq, a, b=None, *, backend=None):
    """Perform arbitrary single and pairwise einsums using only `matmul`,
    `transpose`, `reshape` and `sum`.  The logic for each is cached based on
    the equation and array shape, and each step is only performed if necessary.

    Parameters
    ----------
    eq : str
        The einsum equation.
    a : array_like
        The first array to contract.
    b : array_like, optional
        The second array to contract.
    backend : str, optional
        The backend to use for array operations. If ``None``, dispatch
        automatically based on ``a`` and ``b``.

    Returns
    -------
    array_like
    """
    if b is None:
        return _einsum_single(eq, a, backend=backend)

    (
        eq_a,
        eq_b,
        new_shape_a,
        new_shape_b,
        new_shape_ab,
        perm_ab,
        pure_multiplication,
    ) = _parse_eq_to_batch_matmul(eq, shape(a), shape(b))

    return _do_contraction_via_bmm(
        a,
        b,
        eq_a,
        eq_b,
        new_shape_a,
        new_shape_b,
        new_shape_ab,
        perm_ab,
        pure_multiplication,
        backend,
    )
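The reduction to matrix multiplication that this function relies on can be sketched directly in numpy, e.g. for `"ab,bcd->acd"`: fuse the kept indices of the right operand, matmul over the contracted index, then unfuse.

```python
import numpy as np

rng = np.random.default_rng(1)
a = rng.normal(size=(2, 3))     # "ab"
b = rng.normal(size=(3, 4, 5))  # "bcd"

# fuse "cd" into one dimension of size 20, contract over "b", unfuse
result = (a @ b.reshape(3, 20)).reshape(2, 4, 5)

assert np.allclose(result, np.einsum("ab,bcd->acd", a, b))
```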


def gen_nice_inds():
    """Generate the indices from [a-z, A-Z, reasonable unicode...]."""
    for i in range(26):
        yield chr(ord("a") + i)
    for i in range(26):
        yield chr(ord("A") + i)
    for i in itertools.count(192):
        yield chr(i)


@functools.lru_cache(2**12)
def _parse_tensordot_axes_to_matmul(axes, shape_a, shape_b):
    """Parse a tensordot specification into the necessary sequence of arguments
    for contraction via matrix multiplication. This just converts ``axes``
    into an ``einsum`` eq string then calls ``_parse_eq_to_batch_matmul``.
    """
    ndim_a = len(shape_a)
    ndim_b = len(shape_b)

    if isinstance(axes, int):
        axes_a = tuple(range(ndim_a - axes, ndim_a))
        axes_b = tuple(range(axes))
    else:
        axes_a, axes_b = axes

    num_con = len(axes_a)
    if num_con != len(axes_b):
        raise ValueError(
            f"Axes should have the same length, got {axes_a} and {axes_b}."
        )

    possible_inds = gen_nice_inds()
    inds_a = [next(possible_inds) for _ in range(ndim_a)]
    inds_b = []
    inds_out = inds_a.copy()

    for axb in range(ndim_b):
        if axb not in axes_b:
            # right uncontracted index
            ind = next(possible_inds)
            inds_out.append(ind)
        else:
            # contracted index
            axa = axes_a[axes_b.index(axb)]
            # check that the shapes match
            if shape_a[axa] != shape_b[axb]:
                raise ValueError(
                    f"Dimension mismatch between axes {axa} of {shape_a} and "
                    f"{axb} of {shape_b}: {shape_a[axa]} != {shape_b[axb]}."
                )
            ind = inds_a[axa]
            inds_out.remove(ind)
        inds_b.append(ind)

    eq = f"{''.join(inds_a)},{''.join(inds_b)}->{''.join(inds_out)}"

    return _parse_eq_to_batch_matmul(eq, shape_a, shape_b)


def tensordot(a, b, axes=2, *, backend=None):
    """Perform a tensordot using only `matmul`, `transpose`, `reshape`. The
    logic for each is cached based on the equation and array shape, and each
    step is only performed if necessary.

    Parameters
    ----------
    a, b : array_like
        The arrays to contract.
    axes : int or tuple of (sequence[int], sequence[int])
        The number of axes to contract, or the axes to contract. If an int,
        the last ``axes`` axes of ``a`` and the first ``axes`` axes of ``b``
        are contracted. If a tuple, the axes to contract for ``a`` and ``b``
        respectively.
    backend : str or None, optional
        The backend to use for array operations. If ``None``, dispatch
        automatically based on ``a`` and ``b``.

    Returns
    -------
    array_like
    """
    try:
        # ensure hashable
        axes = tuple(map(int, axes[0])), tuple(map(int, axes[1]))
    except IndexError:
        axes = int(axes)

    (
        eq_a,
        eq_b,
        new_shape_a,
        new_shape_b,
        new_shape_ab,
        perm_ab,
        pure_multiplication,
    ) = _parse_tensordot_axes_to_matmul(axes, shape(a), shape(b))

    return _do_contraction_via_bmm(
        a,
        b,
        eq_a,
        eq_b,
        new_shape_a,
        new_shape_b,
        new_shape_ab,
        perm_ab,
        pure_multiplication,
        backend,
    )
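Mirroring what this function does internally, a `tensordot` with integer `axes` reduces to a single matrix multiplication after fusing the contracted axes on both sides (numpy-only illustration):

```python
import numpy as np

rng = np.random.default_rng(2)
a = rng.normal(size=(2, 3, 4))  # contract the last two axes of a ...
b = rng.normal(size=(3, 4, 5))  # ... with the first two axes of b

# fuse the contracted axes (3*4 = 12) on both sides and matmul
result = a.reshape(2, 12) @ b.reshape(12, 5)

assert np.allclose(result, np.tensordot(a, b, axes=2))
```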


def extract_contractions(
    tree,
    order=None,
    prefer_einsum=False,
):
    """Extract just the information needed to perform the contraction.

    Parameters
    ----------
    order : str or callable, optional
        Supplied to :meth:`ContractionTree.traverse`.
    prefer_einsum : bool, optional
        Prefer to use ``einsum`` for pairwise contractions, even if
        ``tensordot`` can perform the contraction.

    Returns
    -------
    contractions : tuple
        A tuple of tuples, each containing the information needed to
        perform a pairwise contraction. Each tuple contains:

            - ``p``: the parent node,
            - ``l``: the left child node,
            - ``r``: the right child node,
            - ``tdot``: whether to use ``tensordot`` or ``einsum``,
            - ``arg``: the argument to pass to ``tensordot`` or ``einsum``
                i.e. ``axes`` or ``eq``,
            - ``perm``: the permutation required after the contraction, if
                any (only applies to tensordot).

        If both ``l`` and ``r`` are ``None``, then the operation is a single
        term simplification performed with ``einsum``.
    """
    if tree.N == 1:
        # trivial 'contraction', single input maps directly to output
        # possibly with reductions/transpose, setting l but not r flags this
        pi = 1
        li = 0
        ri = None
        tdot = False
        arg = tree.get_eq_sliced()
        perm = None
        return [(pi, li, ri, tdot, arg, perm)]

    contractions = []

    # for compactness we convert nodes to ssa indices
    ssas = {leaf: i for i, leaf in enumerate(tree.gen_leaves())}
    ssa = len(ssas)

    # pairwise contractions
    for p, l, r in tree.traverse(order=order):
        li = ssas.pop(l)
        ri = ssas.pop(r)
        pi = ssas[p] = ssa
        ssa += 1

        if prefer_einsum or not tree.get_can_dot(p):
            tdot = False
            arg = tree.get_einsum_eq(p)
            perm = None
        else:
            tdot = True
            arg = tree.get_tensordot_axes(p)
            perm = tree.get_tensordot_perm(p)

        contractions.append((pi, li, ri, tdot, arg, perm))

    if tree.preprocessing:
        # inplace single term simplifications
        # n.b. these are populated lazily when the other information is
        # computed above, so we do it after
        pre_contractions = (
            (i, None, None, False, eq, None)
            for i, eq in tree.preprocessing.items()
        )
        return (*pre_contractions, *contractions)

    return tuple(contractions)
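The linear-to-SSA id conversion used above can be sketched in isolation (pure Python; `linear_to_ssa` is a hypothetical helper name for illustration, the inverse direction of the library's `ssa_to_linear`): leaves get ids `0..N-1` and each new intermediate takes the next id, which is never reused.

```python
# convert a linear contraction path (positions into a shrinking list) into
# an SSA path (ids that are assigned once and never reused)
def linear_to_ssa(num_inputs, path):
    ids = list(range(num_inputs))
    ssa = num_inputs
    ssa_path = []
    for i, j in path:
        i, j = sorted((i, j))
        rj = ids.pop(j)  # pop the larger position first
        ri = ids.pop(i)  # so the smaller position stays valid
        ssa_path.append((ri, rj))
        ids.append(ssa)
        ssa += 1
    return ssa_path

assert linear_to_ssa(3, [(0, 1), (0, 1)]) == [(0, 1), (2, 3)]
assert linear_to_ssa(4, [(0, 1), (0, 1), (0, 1)]) == [(0, 1), (2, 3), (4, 5)]
```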


class Contractor:
    """Default cotengra network contractor.

    Parameters
    ----------
    contractions : tuple[tuple]
        The sequence of contractions to perform. Each contraction should be a
        tuple containing:

            - ``p``: the parent node,
            - ``l``: the left child node,
            - ``r``: the right child node,
            - ``tdot``: whether to use ``tensordot`` or ``einsum``,
            - ``arg``: the argument to pass to ``tensordot`` or ``einsum``
                i.e. ``axes`` or ``eq``,
            - ``perm``: the permutation required after the contraction, if
                any (only applies to tensordot).

        e.g. built by calling ``extract_contractions(tree)``.

    strip_exponent : bool, optional
        If ``True``, eagerly strip the exponent (in log10) from
        intermediate tensors to control numerical problems from leaving the
        range of the datatype. This method then returns the scaled
        'mantissa' output array and the exponent separately.
    check_zero : bool, optional
        If ``True``, when ``strip_exponent=True``, explicitly check for
        zero-valued intermediates that would otherwise produce ``nan``,
        instead terminating early if encountered and returning
        ``(0.0, 0.0)``.
    backend : str, optional
        What library to use for ``tensordot``, ``einsum`` and
        ``transpose``, it will be automatically inferred from the input
        arrays if not given.
    progbar : bool, optional
        Whether to show a progress bar.
    """

    __slots__ = (
        "contractions",
        "strip_exponent",
        "check_zero",
        "implementation",
        "backend",
        "progbar",
        "__weakref__",
    )

    def __init__(
        self,
        contractions,
        strip_exponent=False,
        check_zero=False,
        implementation="auto",
        backend=None,
        progbar=False,
    ):
        self.contractions = contractions
        self.strip_exponent = strip_exponent
        self.check_zero = check_zero
        self.implementation = implementation
        self.backend = backend
        self.progbar = progbar

    def __call__(self, *arrays, **kwargs):
        """Contract ``arrays`` using operations listed in ``contractions``.

        Parameters
        ----------
        arrays : sequence of array-like
            The arrays to contract.
        kwargs : dict
            Override the default settings for this contraction only.

        Returns
        -------
        output : array
            The contracted output; it will be scaled if
            ``strip_exponent=True``.
        exponent : float
            The exponent of the output in base 10, returned only if
            ``strip_exponent=True``.
        """
        backend = kwargs.pop("backend", self.backend)
        progbar = kwargs.pop("progbar", self.progbar)
        check_zero = kwargs.pop("check_zero", self.check_zero)
        strip_exponent = kwargs.pop("strip_exponent", self.strip_exponent)
        implementation = kwargs.pop("implementation", self.implementation)
        if kwargs:
            raise TypeError(f"Unknown keyword arguments: {kwargs}.")

        if backend is None:
            backend = infer_backend_multi(*arrays)
        xp = get_namespace(backend)

        if implementation == "auto":
            if backend == "numpy":
                # by default only replace numpy's einsum/tensordot
                implementation = "cotengra"
            else:
                implementation = "autoray"

        if implementation == "cotengra":
            _einsum, _tensordot = einsum, tensordot
        elif implementation == "pytblis":
            import pytblis

            _einsum = pytblis.einsum
            _tensordot = pytblis.tensordot
        elif implementation == "autoray":
            try:
                _einsum = xp.einsum
            except ImportError:
                # fallback to cotengra (matmul) implementation
                _einsum = einsum

            try:
                _tensordot = xp.tensordot
            except ImportError:
                # fallback to cotengra (matmul) implementation
                _tensordot = tensordot
        else:
            # manually supplied
            _einsum, _tensordot = implementation

        # temporary storage for intermediates
        N = len(arrays)
        temps = dict(enumerate(arrays))

        exponent = 0.0 if (strip_exponent is not False) else None

        if progbar:
            import tqdm

            contractions = tqdm.tqdm(self.contractions, total=N - 1)
        else:
            contractions = self.contractions

        for pi, li, ri, tdot, arg, perm in contractions:
            if ri is None:
                if li is None:
                    # single term simplification, perform inplace with einsum
                    temps[pi] = _einsum(arg, temps[pi])
                    continue
                else:
                    # trivial 'contraction', single input maps directly to
                    # output, possibly with reductions/transpose via einsum
                    p_array = _einsum(arg, temps[li])
                    if strip_exponent:
                        return p_array, 0.0
                    return p_array

            # get input arrays for this contraction
            l_array = temps.pop(li)
            r_array = temps.pop(ri)

            if tdot:
                p_array = _tensordot(l_array, r_array, arg)
                if perm:
                    p_array = xp.transpose(p_array, perm)
            else:
                p_array = _einsum(arg, l_array, r_array)

            if exponent is not None:
                factor = xp.max(xp.abs(p_array))

                if check_zero and float(factor) == 0.0:
                    # whole contraction is zero, terminate early
                    return 0.0, 0.0
                exponent = exponent + xp.log10(factor)

                if backend == "tensorflow":
                    factor = xp.astype(factor, p_array.dtype)
                    # TODO:
                    # currently special case tensorflow
                    # autoray needs fix for autojit and astype to use generally

                p_array = p_array / factor

            # insert the new intermediate array
            temps[pi] = p_array

        if exponent is not None:
            return p_array, exponent

        return p_array
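The exponent-stripping loop used when `strip_exponent=True` can be demonstrated standalone (numpy only, a sketch of the idea rather than `Contractor` itself): rescaling each intermediate by its max-abs value and accumulating the log10 factor keeps mantissas O(1) even when the true result would underflow.

```python
import numpy as np

rng = np.random.default_rng(3)
arrays = [rng.normal(size=(8, 8)) * 1e-30 for _ in range(16)]

# contract (here just a matrix chain) while stripping the exponent
out = np.eye(8)
exponent = 0.0
for x in arrays:
    out = out @ x
    factor = np.max(np.abs(out))
    exponent += np.log10(factor)
    out = out / factor

# the plain product underflows float64 entirely ...
plain = np.eye(8)
for x in arrays:
    plain = plain @ x
assert np.all(plain == 0.0)

# ... but the stripped version retains the information
assert np.max(np.abs(out)) == 1.0
assert exponent < -300
```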


class CuQuantumContractor:
    def __init__(
        self,
        tree,
        handle_slicing=False,
        autotune=False,
        **kwargs,
    ):
        if kwargs.pop("strip_exponent", None):
            raise ValueError(
                "strip_exponent=True not supported with cuQuantum"
            )

        if tree.has_preprocessing():
            raise ValueError("Preprocessing not supported with cuQuantum yet.")

        if kwargs.pop("progbar", None):
            import warnings

            warnings.warn("Progress bar not supported with cuQuantum yet.")

        if handle_slicing:
            self.eq = tree.get_eq()
            self.shapes = tree.get_shapes()
        else:
            self.eq = tree.get_eq_sliced()
            self.shapes = tree.get_shapes_sliced()

        if tree.is_complete():
            kwargs.setdefault("optimize", {})
            kwargs["optimize"].setdefault("path", tree.get_path())

            if handle_slicing and tree.sliced_inds:
                kwargs["optimize"].setdefault(
                    "slicing",
                    [(ix, tree.size_dict[ix] - 1) for ix in tree.sliced_inds],
                )

        self.kwargs = kwargs
        self.autotune = 3 if autotune is True else autotune
        self.handle = None
        self.network = None

    def setup(self, *arrays):
        import cuquantum

        if hasattr(cuquantum, "bindings"):
            # cuquantum-python >= 25.03
            from cuquantum.tensornet import Network
        else:
            # for cuquantum < 25.03
            from cuquantum import Network

        self.network = Network(
            self.eq,
            *arrays,
        )
        self.network.contract_path(**self.kwargs)
        if self.autotune:
            self.network.autotune(iterations=self.autotune)

    def __call__(
        self,
        *arrays,
        check_zero=False,
        backend=None,
        progbar=False,
    ):
        # can't handle these yet
        assert not check_zero
        assert not progbar
        assert backend is None

        if self.network is None:
            self.setup(*arrays)
        else:
            self.network.reset_operands(*arrays)

        return self.network.contract()

    def __del__(self):
        if self.network is not None:
            self.network.free()


def make_contractor(
    tree,
    order=None,
    prefer_einsum=False,
    strip_exponent=False,
    check_zero=False,
    implementation=None,
    autojit=False,
    progbar=False,
):
    """Get a reusable function which performs the contraction corresponding
    to ``tree``. The various options provide defaults that can also be
    overridden
    when calling the standard contractor.

    Parameters
    ----------
    tree : ContractionTree
        The contraction tree.
    order : str or callable, optional
        Supplied to :meth:`ContractionTree.traverse`, the order in which
        to perform the pairwise contractions given by the tree.
    prefer_einsum : bool, optional
        Prefer to use ``einsum`` for pairwise contractions, even if
        ``tensordot`` can perform the contraction.
    strip_exponent : bool, optional
        If ``True``, the function will strip the exponent from the output
        array and return it separately.
    check_zero : bool, optional
        If ``True``, when ``strip_exponent=True``, explicitly check for
        zero-valued intermediates that would otherwise produce ``nan``,
        instead terminating early if encountered and returning
        ``(0.0, 0.0)``.
    implementation : str or tuple[callable, callable], optional
        What library to use to actually perform the contractions. Options are

        - "auto": let cotengra choose
        - "autoray": dispatch with autoray, using the ``tensordot`` and
          ``einsum`` implementation of the backend
        - "cotengra": use the ``tensordot`` and ``einsum`` implementation of
          cotengra, which is based on batch matrix multiplication. This is
          faster for some backends like numpy, and also enables libraries
          which don't yet provide ``tensordot`` and ``einsum`` to be used.
        - "cuquantum": use the cuquantum library to perform the whole
          contraction (not just individual contractions).
        - tuple[callable, callable]: manually supply the ``tensordot`` and
          ``einsum`` implementations to use.

    autojit : bool, optional
        If ``True``, use :func:`autoray.autojit` to compile the contraction
        function.
    progbar : bool, optional
        Whether to show progress through the contraction by default.

    Returns
    -------
    fn : callable
        The contraction function, with signature ``fn(*arrays)``.
    """
    if implementation is None:
        implementation = get_default_implementation()

    if implementation == "cuquantum":
        fn = CuQuantumContractor(
            tree,
            strip_exponent=strip_exponent,
            check_zero=check_zero,
            progbar=progbar,
        )
    else:
        fn = Contractor(
            contractions=extract_contractions(tree, order, prefer_einsum),
            strip_exponent=strip_exponent,
            check_zero=check_zero,
            implementation=implementation,
            progbar=progbar,
        )
        if autojit:
            from autoray import autojit as _autojit

            fn = _autojit(fn)

    return fn
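
The ``strip_exponent`` option returns results as a ``(mantissa, exponent)`` pair rather than a single, possibly overflowing, scalar. As a standalone numeric sketch of that convention (plain floats, not cotengra's array implementation), a value can be split and recombined like so:

```python
import math


def strip(x, base=10.0):
    # factor out a base-10 exponent so the mantissa stays O(1):
    # x == m * base**e
    e = math.floor(math.log10(abs(x)))
    m = x / base**e
    return m, e


def recombine(m, e, base=10.0):
    # invert the stripping to recover the plain scalar value
    return m * base**e


m, e = strip(12345.0)
print(m, e)  # mantissa in [1, 10), integer exponent
print(recombine(m, e))
```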


================================================
FILE: cotengra/core.py
================================================
"""Core contraction tree data structure and methods."""

import collections
import functools
import itertools
import math
import warnings
from dataclasses import dataclass
from typing import Optional

from autoray import do, get_namespace, infer_backend

from .contract import make_contractor
from .hypergraph import get_hypergraph
from .nodeops import get_nodeops
from .parallel import (
    can_scatter,
    maybe_leave_pool,
    maybe_rejoin_pool,
    parse_parallel_arg,
    scatter,
    submit,
)
from .pathfinders.path_simulated_annealing import (
    parallel_temper_tree,
    simulated_anneal_tree,
)
from .plot import (
    plot_contractions,
    plot_contractions_alt,
    plot_hypergraph,
    plot_tree_circuit,
    plot_tree_flat,
    plot_tree_ring,
    plot_tree_rubberband,
    plot_tree_span,
    plot_tree_tent,
)
from .scoring import (
    DEFAULT_COMBO_FACTOR,
    CompressedStatsTracker,
    get_score_fn,
)
from .utils import (
    MaxCounter,
    compute_size_by_dict,
    deprecated,
    get_rng,
    get_symbol,
    groupby,
    inputs_output_to_eq,
    interleave,
    oset,
    prod,
    unique,
)


def cached_node_property(name):
    """Decorator for caching information about nodes."""

    def wrapper(meth):
        @functools.wraps(meth)
        def getter(self, node):
            try:
                return self.info[node][name]
            except KeyError:
                self.info[node][name] = value = meth(self, node)
                return value

        return getter

    return wrapper
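
The decorator above memoizes per-node computations in each node's ``info`` dict. A self-contained sketch of the same pattern, applied to a hypothetical toy class (the ``Toy`` class is invented for illustration):

```python
import functools


def cached_node_property(name):
    # cache the result of ``meth(self, node)`` under
    # ``self.info[node][name]`` so repeated queries are free
    def wrapper(meth):
        @functools.wraps(meth)
        def getter(self, node):
            try:
                return self.info[node][name]
            except KeyError:
                self.info[node][name] = value = meth(self, node)
                return value

        return getter

    return wrapper


class Toy:
    def __init__(self):
        # per-node info dicts must pre-exist, as in ContractionTree
        self.info = {"a": {}}
        self.ncalls = 0

    @cached_node_property("double")
    def get_double(self, node):
        self.ncalls += 1
        return 2 * len(node)


t = Toy()
t.get_double("a")
t.get_double("a")
print(t.ncalls)  # the underlying method only ran once
```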


def legs_union(legs_seq):
    """Combine a sequence of legs into a single set of legs, summing their
    appearances.
    """
    new_legs, *rem_legs = legs_seq
    new_legs = new_legs.copy()
    for legs in rem_legs:
        for ix, ix_count in legs.items():
            new_legs[ix] = new_legs.get(ix, 0) + ix_count
    return new_legs


def legs_without(legs, ind):
    """Discard ``ind`` from legs to create a new set of legs."""
    new_legs = legs.copy()
    new_legs.pop(ind, None)
    return new_legs
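
A quick runnable illustration of how these two helpers behave (the function bodies are copied above so the sketch is self-contained): legs map each index to its appearance count, so a union sums counts, while removal simply drops a key:

```python
def legs_union(legs_seq):
    # combine a sequence of legs, summing appearance counts
    new_legs, *rem_legs = legs_seq
    new_legs = new_legs.copy()
    for legs in rem_legs:
        for ix, ix_count in legs.items():
            new_legs[ix] = new_legs.get(ix, 0) + ix_count
    return new_legs


def legs_without(legs, ind):
    # drop ``ind`` to create a new set of legs
    new_legs = legs.copy()
    new_legs.pop(ind, None)
    return new_legs


a = {"a": 1, "b": 1}
b = {"b": 1, "c": 1}
u = legs_union([a, b])
print(u)  # {'a': 1, 'b': 2, 'c': 1}
print(legs_without(u, "b"))  # {'a': 1, 'c': 1}
```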


def get_with_default(k, obj, default):
    return obj.get(k, default)


@dataclass(order=True, frozen=True)
class SliceInfo:
    inner: bool
    ind: str
    size: int
    project: Optional[int]

    @property
    def sliced_range(self):
        if self.project is None:
            return range(self.size)
        else:
            return [self.project]


def get_slice_strides(sliced_inds):
    """Compute the 'strides' given the (ordered) dictionary of sliced indices."""
    slice_infos = list(sliced_inds.values())
    nsliced = len(slice_infos)
    strides = [1] * nsliced
    # backwards cumulative product
    for i in range(nsliced - 2, -1, -1):
        strides[i] = strides[i + 1] * slice_infos[i + 1].size
    return strides
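
These strides let a single flat slice number be decomposed into one value per sliced index, row-major style (last index varying fastest). A self-contained sketch using plain sizes rather than ``SliceInfo`` objects (an assumption made for illustration):

```python
def strides_from_sizes(sizes):
    # backwards cumulative product: the last index varies fastest
    strides = [1] * len(sizes)
    for i in range(len(sizes) - 2, -1, -1):
        strides[i] = strides[i + 1] * sizes[i + 1]
    return strides


def unravel(flat, sizes):
    # map a flat slice number to one value per sliced index
    return tuple(
        (flat // stride) % size
        for stride, size in zip(strides_from_sizes(sizes), sizes)
    )


sizes = [4, 3, 2]
print(strides_from_sizes(sizes))  # [6, 2, 1]
print(unravel(11, sizes))  # (1, 2, 1) since 11 == 1*6 + 2*2 + 1*1
```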


class AdderWithMaybeExponentStripped:
    """Object that ddds two arrays, or tuples of (array, exponent) together in
    a stable and branchless way. It also internally caches the backend on the
    first call.
    """

    __slots__ = ("backend", "namespace", "need_to_cast")

    def __init__(self):
        self.backend = None
        self.namespace = None
        self.need_to_cast = False

    def __call__(self, x, y):
        xistup = isinstance(x, tuple)
        yistup = isinstance(y, tuple)
        if not (xistup or yistup):
            # simple sum without exponent
            return x + y

        if xistup:
            xm, xe = x
        else:
            xm = x
            xe = 0.0

        if yistup:
            ym, ye = y
        else:
            ym = y
            ye = 0.0

        if self.backend is None:
            self.backend = infer_backend(xm)
            self.namespace = get_namespace(self.backend)
            self.need_to_cast = self.backend == "tensorflow"

        # perform branchless for jit etc.
        e = max(xe, ye)

        if self.need_to_cast:
            xcoeff = self.namespace.astype(10.0 ** (xe - e), xm.dtype)
            ycoeff = self.namespace.astype(10.0 ** (ye - e), ym.dtype)
            m = xm * xcoeff + ym * ycoeff
        else:
            m = xm * 10 ** (xe - e) + ym * 10 ** (ye - e)

        return (m, e)


class ContractionTree:
    """Binary tree representing a tensor network contraction.

    Parameters
    ----------
    inputs : sequence of str
        The list of input tensor's indices.
    output : str
        The output indices.
    size_dict : dict[str, int]
        The size of each index.
    track_childless : bool, optional
        Whether to dynamically keep track of which nodes are childless. Useful
        if you are 'divisively' building the tree.
    track_flops : bool, optional
        Whether to dynamically keep track of the total number of flops. If
        ``False`` you can still compute this once the tree is complete.
    track_write : bool, optional
        Whether to dynamically keep track of the total number of elements
        written. If ``False`` you can still compute this once the tree is
        complete.
    track_size : bool, optional
        Whether to dynamically keep track of the largest tensor so far. If
        ``False`` you can still compute this once the tree is complete.
    objective : str or Objective, optional
        A default objective function to use for further optimization and
        scoring, for example reconfiguring or computing the combo cost. If not
        supplied the default is to create a flops objective when needed.

    Attributes
    ----------
    children : dict[node, tuple[node]]
        Mapping of each node to two children.
    info : dict[node, dict]
        Information about the tree nodes. The key is the set of inputs (a
        set of input indices) the node contains. Or in other words, the
        subgraph of the node. The value is a dictionary to cache information
        about effective 'leg' indices, size, flops of formation etc.
    """

    def __init__(
        self,
        inputs,
        output,
        size_dict,
        track_childless=False,
        track_flops=False,
        track_write=False,
        track_size=False,
        objective=None,
        nodeops="auto",
    ):
        self.inputs = inputs
        self.output = output

        if isinstance(self.inputs[0], set) or isinstance(self.output, set):
            warnings.warn(
                "The inputs or output of this tree are not ordered."
                "Costs will be accurate but actually contracting requires "
                "ordered indices corresponding to array axes."
            )

        if not isinstance(next(iter(size_dict.values()), 1), int):
            # make sure we are working with python integers to avoid overflow
            # comparison errors with inf etc.
            self.size_dict = {k: int(v) for k, v in size_dict.items()}
        else:
            self.size_dict = size_dict

        self.N = len(self.inputs)

        # the index representation for each input is an ordered mapping of
        # each index to the number of times it has appeared on children. By
        # also tracking the total number of appearances one can efficiently
        # and locally compute which indices should be kept or contracted
        self.appearances = {}
        for term in self.inputs:
            for ix in term:
                self.appearances[ix] = self.appearances.get(ix, 0) + 1
        # adding output appearances ensures these are never contracted away,
        # N.B. if after this step every appearance count is exactly 2,
        # then there are no 'hyper' indices in the contraction
        for ix in self.output:
            self.appearances[ix] = self.appearances.get(ix, 0) + 1

        # this stores potential preprocessing steps that are not part of the
        # main contraction tree, but assumed to have been applied, for example
        # tracing or summing over indices that appear only once
        self.preprocessing = {}

        # mapping of parents to children - the core binary tree object
        self.children = {}

        # information about all the nodes
        self.nodeops = get_nodeops(nodeops, self.N)
        self.info = {}

        # add constant nodes: the leaves
        for leaf in self.gen_leaves():
            self._add_node(leaf, extent=1)  # leaf extent is always 1

        # and the root or top node
        self.root = self.nodeops.node_supremum(self.N)
        self._add_node(self.root, extent=self.N)  # root extent is always N

        if self.N == 1:
            # trivial 'contraction', single input maps directly to output,
            self.children[self.root] = (leaf,)

        # whether to keep track of dangling nodes/subgraphs
        self.track_childless = track_childless
        if self.track_childless:
            # the set of dangling nodes
            self.childless = oset([self.root])

        # running largest_intermediate and total flops
        self._track_flops = track_flops
        if track_flops:
            self._flops = 0

        self._track_write = track_write
        if track_write:
            self._write = 0

        self._track_size = track_size
        if track_size:
            self._sizes = MaxCounter()

        # container for caching subtree reconfiguration candidates
        self.already_optimized = dict()

        # info relating to slicing (base constructor is always unsliced)
        self.multiplicity = 1
        self.sliced_inds = {}
        self.sliced_inputs = frozenset()

        # cache for compiled contraction cores
        self.contraction_cores = {}

        # a default objective function useful for
        # further optimization and scoring
        self._default_objective = objective

    def set_state_from(self, other):
        """Set the internal state of this tree to that of ``other``."""
        # immutable or never mutated properties
        for attr in (
            "appearances",
            "inputs",
            "multiplicity",
            "N",
            "output",
            "root",
            "size_dict",
            "sliced_inputs",
            "_default_objective",
        ):
            setattr(self, attr, getattr(other, attr))

        # mutable properties
        for attr in (
            "children",
            "contraction_cores",
            "nodeops",
            "sliced_inds",
            "preprocessing",
        ):
            setattr(self, attr, getattr(other, attr).copy())

        # dicts of mutable
        for attr in ("info", "already_optimized"):
            setattr(
                self,
                attr,
                {k: v.copy() for k, v in getattr(other, attr).items()},
            )

        self.track_childless = other.track_childless
        if other.track_childless:
            self.childless = other.childless.copy()

        self._track_flops = other._track_flops
        if other._track_flops:
            self._flops = other._flops

        self._track_write = other._track_write
        if other._track_write:
            self._write = other._write

        self._track_size = other._track_size
        if other._track_size:
            self._sizes = other._sizes.copy()

    def copy(self):
        """Create a copy of this ``ContractionTree``."""
        tree = object.__new__(self.__class__)
        tree.set_state_from(self)
        return tree

    def set_default_objective(self, objective):
        """Set the objective function for this tree."""
        self._default_objective = get_score_fn(objective)

    def get_default_objective(self):
        """Get the objective function for this tree."""
        if self._default_objective is None:
            self._default_objective = get_score_fn("flops")
        return self._default_objective

    def get_default_combo_factor(self):
        """Get the default combo factor for this tree."""
        objective = self.get_default_objective()
        try:
            return objective.factor
        except AttributeError:
            return DEFAULT_COMBO_FACTOR

    def get_score(self, objective=None):
        """Score this tree using the default objective function."""
        from .scoring import get_score_fn

        if objective is None:
            objective = self.get_default_objective()

        objective = get_score_fn(objective)

        return objective({"tree": self})

    @property
    def nslices(self):
        """Simple alias for how many independent contractions this tree
        represents overall.
        """
        return self.multiplicity

    @property
    def nchunks(self):
        """The number of 'chunks' - determined by the number of sliced output
        indices.
        """
        return prod(
            si.size for si in self.sliced_inds.values() if not si.inner
        )

    def input_to_node(self, i):
        """Create a node from a single input index, i.e. the subgraph that
        only contains the input tensor ``i``.

        Parameters
        ----------
        i : int
            The input index.

        Returns
        -------
        node : node_type
        """
        return self.nodeops.node_from_single(i)

    def node_to_input(self, node):
        """Assuming ``node`` has one element, i.e. is a leaf, return the
        corresponding input index.

        Parameters
        ----------
        node : node_type
            The node to convert.

        Returns
        -------
        i : int
        """
        return self.nodeops.node_get_single_el(node)

    def node_to_terms(self, node):
        """Turn a node into the corresponding terms a sequence of leaf legs,
        corresponding to input indices.
        """
        return (
            self.get_legs(self.input_to_node(i))
            for i in self.get_subgraph(node)
        )

    def gen_leaves(self):
        """Generate the nodes representing leaves of the contraction tree, i.e.
        of size 1 each corresponding to a single input tensor.
        """
        return map(self.input_to_node, range(self.N))

    def get_incomplete_nodes(self):
        """Get the set of current nodes that have no children and the set of
        nodes that have no parents. These are the 'childless' and 'parentless'
        nodes respectively, that need to be contracted to complete the tree.
        The parentless nodes are grouped into the childless nodes that contain
        them as subgraphs.

        Returns
        -------
        groups : dict[node_type, list[node_type]]
            A mapping of each childless node to the list of parentless nodes
            that are beneath it.

        See Also
        --------
        autocomplete
        """
        childless = dict.fromkeys(
            node
            for node in self.info
            # start with all but leaves
            if not self.is_leaf(node)
        )
        parentless = dict.fromkeys(
            node
            for node in self.info
            # start with all but root
            if not self.is_root(node)
        )
        for p, (l, r) in self.children.items():
            parentless.pop(l)
            parentless.pop(r)
            childless.pop(p)

        groups = {node: [] for node in childless}
        for node in parentless:
            # get the smallest node that contains this node
            ancestor = min(
                (
                    possible_parent
                    for possible_parent in childless
                    if set(self.get_subgraph(node)).issubset(
                        self.get_subgraph(possible_parent)
                    )
                    # XXX: for non-ssa node types could do:
                    # if self.is_descendant(node, possible_parent)
                ),
                key=self.get_extent,
            )
            groups[ancestor].append(node)

        return groups

    def autocomplete(self, **contract_opts):
        """Contract all remaining node groups (as computed by
        ``tree.get_incomplete_nodes``) in the tree to complete it.

        Parameters
        ----------
        contract_opts
            Options to pass to ``tree.contract_nodes``.

        See Also
        --------
        get_incomplete_nodes, contract_nodes
        """
        groups = self.get_incomplete_nodes()
        for grandparent, parentless_subnodes in groups.items():
            self.contract_nodes(
                parentless_subnodes, grandparent=grandparent, **contract_opts
            )

    @classmethod
    def from_path(
        cls,
        inputs,
        output,
        size_dict,
        *,
        path=None,
        ssa_path=None,
        edge_path=None,
        optimize="auto",
        autocomplete="auto",
        check=False,
        **kwargs,
    ):
        """Create a (completed) ``ContractionTree`` from the usual inputs plus
        a standard contraction path or 'ssa_path' - you need to supply one.

        Parameters
        ----------
        inputs : Sequence[Sequence[str]]
            The input indices of each tensor, as single unicode characters.
        output : Sequence[str]
            The output indices.
        size_dict : dict[str, int]
            The size of each index.
        path : Sequence[Sequence[int]], optional
            The contraction path, a sequence of pairs of tensor ids to
            contract. The ids are linear indices into the list of temporary
            tensors, which are recycled as each contraction pops a pair and
            appends the result. One of ``path``, ``ssa_path`` or ``edge_path``
            must be supplied.
        ssa_path : Sequence[Sequence[int]], optional
            The contraction path, a sequence of pairs of indices to contract.
            The indices are single use, as if the result of each contraction is
            appended to the end of the list of temporary tensors without
            popping. One of ``path``, ``ssa_path`` or ``edge_path`` must be
            supplied.
        edge_path : Sequence[str], optional
            The contraction path, a sequence of indices to contract in order.
            One of ``path``, ``ssa_path`` or ``edge_path`` must be supplied.
        optimize : str, optional
            If a contraction within the path contains 3 or more tensors, how to
            optimize this subcontraction into a binary tree.
        autocomplete : "auto" or bool, optional
            Whether to automatically complete the path, i.e. contract all
            remaining nodes. If "auto" then a warning is issued if the path is
            not complete.
        check : bool, optional
            Whether to perform some basic checks while creating the contraction
            nodes.

        Returns
        -------
        ContractionTree
        """
        if (path is None) + (ssa_path is None) + (edge_path is None) != 2:
            raise ValueError(
                "Exactly one of ``path``, ``ssa_path`` or ``edge_path`` "
                "must be supplied."
            )

        contract_opts = {"optimize": optimize, "check": check}

        if edge_path is not None:
            from .pathfinders.path_basic import edge_path_to_ssa

            ssa_path = edge_path_to_ssa(edge_path, inputs)

        if ssa_path is not None:
            path = ssa_path

        tree = cls(inputs, output, size_dict, **kwargs)

        if ssa_path is not None:
            # ssa path ('single static assignment' ids)
            nodes = dict(enumerate(tree.gen_leaves()))
            ssa = len(nodes)
            for p in path:
                merge = [nodes.pop(i) for i in p]
                nodes[ssa] = tree.contract_nodes(merge, **contract_opts)
                ssa += 1
            nodes = nodes.values()
        else:
            # regular path ('recycled' ids)
            nodes = list(tree.gen_leaves())
            for p in path:
                merge = [nodes.pop(i) for i in sorted(p, reverse=True)]
                nodes.append(tree.contract_nodes(merge, **contract_opts))

        if len(nodes) > 1 and autocomplete:
            if autocomplete == "auto":
                # warn that we are completing
                warnings.warn(
                    "Path was not complete - contracting all remaining. "
                    "You can silence this warning with `autocomplete=True`, "
                    "or produce an incomplete tree with `autocomplete=False`."
                )

            tree.contract_nodes(nodes, grandparent=tree.root, **contract_opts)

        return tree

    @classmethod
    def from_info(cls, info, **kwargs):
        """Create a ``ContractionTree`` from an ``opt_einsum.PathInfo`` object."""
        return cls.from_path(
            inputs=info.input_subscripts.split(","),
            output=info.output_subscript,
            size_dict=info.size_dict,
            path=info.path,
            **kwargs,
        )

    @classmethod
    def from_eq(cls, eq, size_dict, **kwargs):
        """Create a empty ``ContractionTree`` directly from an equation and set
        of shapes.

        Parameters
        ----------
        eq : str
            The einsum string equation.
        size_dict : dict[str, int]
            The size of each index.
        """
        lhs, output = eq.split("->")
        inputs = lhs.split(",")
        return cls(inputs, output, size_dict, **kwargs)

    def get_eq(self):
        """Get the einsum equation corresponding to this tree. Note that this
        is the total (or original) equation, so includes indices which have
        been sliced.

        Returns
        -------
        eq : str
        """
        return inputs_output_to_eq(self.inputs, self.output)

    def get_shapes(self):
        """Get the shapes of the input tensors corresponding to this tree.

        Returns
        -------
        shapes : tuple[tuple[int]]
        """
        return tuple(
            tuple(self.size_dict[ix] for ix in term) for term in self.inputs
        )

    def get_inputs_sliced(self):
        """Get the input indices corresponding to a single slice of this tree,
        i.e. with sliced indices removed.

        Returns
        -------
        inputs : tuple[tuple[str]]
        """
        return tuple(
            tuple(ix for ix in term if ix not in self.sliced_inds)
            for term in self.inputs
        )

    def get_output_sliced(self):
        """Get the output indices corresponding to a single slice of this tree,
        i.e. with sliced indices removed.

        Returns
        -------
        output : tuple[str]
        """
        return tuple(ix for ix in self.output if ix not in self.sliced_inds)

    def get_eq_sliced(self):
        """Get the einsum equation corresponding to a single slice of this
        tree, i.e. with sliced indices removed.

        Returns
        -------
        eq : str
        """
        return inputs_output_to_eq(
            self.get_inputs_sliced(), self.get_output_sliced()
        )

    def get_shapes_sliced(self):
        """Get the shapes of the input tensors corresponding to a single slice
        of this tree, i.e. with sliced indices removed.

        Returns
        -------
        shapes : tuple[tuple[int]]
        """
        return tuple(
            tuple(
                self.size_dict[ix] for ix in term if ix not in self.sliced_inds
            )
            for term in self.inputs
        )

    @classmethod
    def from_edge_path(
        cls,
        edge_path,
        inputs,
        output,
        size_dict,
        optimize="auto",
        autocomplete="auto",
        check=False,
        **kwargs,
    ):
        """Create a ``ContractionTree`` from an edge elimination ordering."""
        warnings.warn(
            "ContractionTree.from_edge_path(edge_path, ...) is deprecated. Use"
            " ContractionTree.from_path(edge_path=edge_path, ...) instead."
        )
        return cls.from_path(
            inputs,
            output,
            size_dict,
            edge_path=edge_path,
            optimize=optimize,
            autocomplete=autocomplete,
            check=check,
            **kwargs,
        )

    def _add_node(self, node, check=False, **kwargs):
        """Add a node to this tree, specified either directly as a existing
        node type, or as a subgraph (i.e. a sequence of input positions) which
        is then converted to a node with the corresponding extent and subgraph
        information.

        Note if "ssa" nodes are used, then adding two equivalent subgraphs
        will result in *two* new nodes, since the node labels do not
        themselves encode the subgraph information.

        Parameters
        ----------
        node : node_type or Sequence[int]
            The node to add, either directly as a node type, or as a subgraph
            specified by the sequence of input positions it contains.
        check : bool, optional
            Whether to perform some basic checks on the node and tree state
            before adding the node.
        kwargs : dict, optional
            Additional information to cache about this node, for example its
            'extent' or 'subgraph'. If it is being specified as a sequence of
            input positions, these two will be injected automatically.

        Returns
        -------
        node : node_type
             The node that was added, which may be different from the input if
             the input was specified as a sequence of input positions.
        """
        # first we possibly convert from subgraph spec to node
        if not isinstance(node, self.nodeops.node_type):
            # assume the node was specified as a sequence of input positions
            subgraph = tuple(node)

            if len(subgraph) == 1:
                # leaf node, for ssa we don't want to generate a new node
                (i,) = subgraph
                node = self.nodeops.node_from_single(i)
            elif len(subgraph) == self.N:
                # root node, for ssa we don't want to generate a new node
                node = self.root
            else:
                # intermediate, assume we can generate new node identifier
                node = self.nodeops.new_node_for_seq(subgraph)
                kwargs.setdefault("extent", len(subgraph))
                kwargs.setdefault("subgraph", subgraph)

        if check:
            if len(self.info) > 2 * self.N - 1:
                raise ValueError("There are too many children already.")
            if len(self.children) > self.N - 1:
                raise ValueError("There are too many branches already.")
            if not self.nodeops.is_valid_node(node):
                raise ValueError("{} is not a valid node.".format(node))

        try:
            d = self.info[node]
        except KeyError:
            d = self.info[node] = {}

        if kwargs:
            d.update(kwargs)

        return node

    def _remove_node(self, node):
        """Remove ``node`` from this tree and update the flops and maximum size
        if tracking them respectively, as well as input pre-processing.
        """
        node_extent = self.get_extent(node)

        if node_extent == 1:
            # leaf nodes should always exist
            self.info[node].clear()
            self.info[node]["extent"] = 1  # leaf extent is always 1
            # input: remove any associated preprocessing
            self.preprocessing.pop(self.nodeops.node_get_single_el(node), None)
        else:
            # only non-leaf nodes contribute to size, flops and write
            if self._track_size:
                self._sizes.discard(self.get_size(node))

            if self._track_flops:
                self._flops -= self.get_flops(node)

            if self._track_write:
                self._write -= self.get_size(node)

            del self.children[node]
            if node_extent == self.N:
                # root node should always exist
                self.info[node].clear()
                self.info[node]["extent"] = self.N  # root extent is always N
            else:
                del self.info[node]

    def compute_leaf_legs(self, i):
        """Compute the effective 'outer' indices for the ith input tensor. This
        is not always simply the ith input indices, due to A) potential slicing
        and B) potential preprocessing.
        """
        # indices of input tensor (after slicing which is done immediately)
        if self.sliced_inds:
            term = tuple(
                ix for ix in self.inputs[i] if ix not in self.sliced_inds
            )
        else:
            term = self.inputs[i]

        legs = {}
        for ix in term:
            legs[ix] = legs.get(ix, 0) + 1

        # check for single term simplifications; these are treated as a simple
        # preprocessing step that is only taken into account during actual
        # contraction, and are not represented in the binary tree
        # N.B. need to compute simplifiability *after* slicing
        is_simplifiable = (
            # repeated indices (diag or traces)
            (len(term) != len(legs))
            or
            # reduced indices (are summed immediately)
            any(
                ix_count == self.appearances[ix]
                for ix, ix_count in legs.items()
            )
        )

        if is_simplifiable:
            # compute the simplified legs -> the new effective input legs
            legs = {
                ix: ix_count
                for ix, ix_count in legs.items()
                if ix_count != self.appearances[ix]
            }
            # add a preprocessing step to the list of contractions
            eq = inputs_output_to_eq((term,), legs, canonicalize=True)
            self.preprocessing[i] = eq

        return legs

    def has_preprocessing(self):
        """Check if any inputs have associated preprocessing steps, e.g.
        tracing repeated indices or summing indices that appear only once.
        """
        # touch all input legs, since preprocessing is lazily computed
        for node in self.gen_leaves():
            self.get_legs(node)
        return bool(self.preprocessing)

    def has_hyper_indices(self):
        """Check if there are any 'hyper' indices in the contraction, i.e.
        indices that don't appear exactly twice, when considering the inputs
        and output.
        """
        return any(ix_count != 2 for ix_count in self.appearances.values())

    @cached_node_property("extent")
    def get_extent(self, node):
        """Get the number of input tensors contained in the subgraph
        represented by ``node``.

        Parameters
        ----------
        node : node_type
            The node to compute the extent of.

        Returns
        -------
        extent : int
        """
        if node in self.children:
            l, r = self.children[node]
            return self.get_extent(l) + self.get_extent(r)
        else:
            return self.nodeops.node_size(node)

    @cached_node_property("subgraph")
    def get_subgraph(self, node) -> tuple[int, ...]:
        """Get the sequence of input tensors contained in subgraph represented
        by ``node``.

        Parameters
        ----------
        node : node_type
            The node to compute the subgraph of.

        Returns
        -------
        subgraph : tuple[int]
            The input tensor indices contained in this subgraph.
        """
        node_extent = self.get_extent(node)
        if node_extent == 1:
            return (self.nodeops.node_get_single_el(node),)
        elif node_extent == self.N:
            return tuple(range(self.N))
        else:
            try:
                left, right = self.children[node]
                return self.get_subgraph(left) + self.get_subgraph(right)
            except KeyError:
                # this should only happen when directly creating incomplete
                # nodes, i.e. not in a bottom-up fashion; ssa nodes, for
                # example, will not support this operation
                return tuple(node)

    @cached_node_property("legs")
    def get_legs(self, node):
        """Get the effective 'outer' indices for the collection of tensors
        in ``node``.
        """
        # should this comparison be with self.N for efficiency?
        if node == self.root:
            # root legs are output, after slicing
            # n.b. the index counts are irrelevant for the output
            return {ix: 0 for ix in self.output if ix not in self.sliced_inds}

        node_extent = self.get_extent(node)

        if node_extent == 1:
            # leaf legs are inputs
            return self.compute_leaf_legs(
                self.nodeops.node_get_single_el(node)
            )

        try:
            involved = self.get_involved(node)
        except KeyError:
            # this should only happen when directly creating incomplete
            # nodes, i.e. not in a bottom-up fashion
            involved = legs_union(self.node_to_terms(node))

        return {
            ix: ix_count
            for ix, ix_count in involved.items()
            if ix_count < self.appearances[ix]
        }

    @cached_node_property("involved")
    def get_involved(self, node):
        """Get all the indices involved in the formation of subgraph ``node``."""
        if self.is_leaf(node):
            return {}
        sub_legs = map(self.get_legs, self.children[node])
        return legs_union(sub_legs)

    @cached_node_property("size")
    def get_size(self, node):
        """Get the tensor size of ``node``."""
        return compute_size_by_dict(self.get_legs(node), self.size_dict)

    @cached_node_property("flops")
    def get_flops(self, node):
        """Get the FLOPs for the pairwise contraction that will create
        ``node``.
        """
        if self.is_leaf(node):
            return 0
        involved = self.get_involved(node)
        return compute_size_by_dict(involved, self.size_dict)
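The FLOP estimate above is simply the product of the dimensions of every index involved in the pairwise contraction. A minimal standalone sketch (the `flops_estimate` helper is hypothetical, standing in for `compute_size_by_dict` applied to the involved legs):

```python
import math

def flops_estimate(legs_l, legs_r, size_dict):
    # the scalar-op count of a pairwise contraction scales as the product
    # of the dimensions of all involved indices (union of both operands)
    involved = set(legs_l) | set(legs_r)
    return math.prod(size_dict[ix] for ix in involved)

# contracting (a,b) with (b,c) over b: 2 * 4 * 3 = 24 scalar ops
print(flops_estimate("ab", "bc", {"a": 2, "b": 4, "c": 3}))  # -> 24
```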

    @cached_node_property("can_dot")
    def get_can_dot(self, node):
        """Get whether this contraction can be performed as a dot product (i.e.
        with ``tensordot``), or else requires ``einsum``, as it has indices
        that don't appear exactly twice in either the inputs or the output.
        """
        l, r = self.children[node]
        sp, sl, sr = map(self.get_legs, (node, l, r))
        return set(sp) == set(sl).symmetric_difference(sr)

    @cached_node_property("inds")
    def get_inds(self, node):
        """Get the indices of this node - an ordered string version of
        ``get_legs`` that starts with ``tree.inputs`` and maintains the order
        they appear in each contraction 'ABC,abc->ABCabc', to match tensordot.
        """
        # NB: self.inputs and self.output contain the full (unsliced) indices
        #     thus we filter even the input legs and output legs

        if self.get_extent(node) in (1, self.N):
            return "".join(self.get_legs(node))

        legs = self.get_legs(node)
        l_inds, r_inds = map(self.get_inds, self.children[node])
        # the filter here takes care of contracted indices
        return "".join(
            unique(filter(legs.__contains__, itertools.chain(l_inds, r_inds)))
        )

    @cached_node_property("tensordot_axes")
    def get_tensordot_axes(self, node):
        """Get the ``axes`` arg for a tensordot ocontraction that produces
        ``node``. The pairs are sorted in order of appearance on the left
        input.
        """
        l_inds, r_inds = map(self.get_inds, self.children[node])
        l_axes, r_axes = [], []
        for i, ind in enumerate(l_inds):
            j = r_inds.find(ind)
            if j != -1:
                l_axes.append(i)
                r_axes.append(j)
        return tuple(l_axes), tuple(r_axes)
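The axes pairing above can be exercised standalone on plain index strings (the free function `tensordot_axes` is an illustrative extraction of the method body, not part of cotengra's API):

```python
def tensordot_axes(l_inds, r_inds):
    # pair up the indices shared by both operands, ordered by their
    # position of appearance on the left input, matching tensordot's
    # ``axes=(l_axes, r_axes)`` convention
    l_axes, r_axes = [], []
    for i, ind in enumerate(l_inds):
        j = r_inds.find(ind)
        if j != -1:
            l_axes.append(i)
            r_axes.append(j)
    return tuple(l_axes), tuple(r_axes)

# 'b' and 'c' are shared: left positions (1, 2), right positions (1, 0)
print(tensordot_axes("abc", "cbd"))  # -> ((1, 2), (1, 0))
```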

    @cached_node_property("tensordot_perm")
    def get_tensordot_perm(self, node):
        """Get the permutation required, if any, to bring the tensordot output
        of this nodes contraction into line with ``self.get_inds(node)``.
        """
        l_inds, r_inds = map(self.get_inds, self.children[node])
        # the target output inds
        p_inds = self.get_inds(node)
        # the tensordot output inds
        td_inds = "".join(sorted(p_inds, key=f"{l_inds}{r_inds}".find))
        if td_inds == p_inds:
            return None
        return tuple(map(td_inds.find, p_inds))

    @cached_node_property("einsum_eq")
    def get_einsum_eq(self, node):
        """Get the einsum string describing the contraction that produces
        ``node``, unlike ``get_inds`` the characters are mapped into [a-zA-Z],
        for compatibility with ``numpy.einsum`` for example.
        """
        l, r = self.children[node]
        l_inds, r_inds, p_inds = map(self.get_inds, (l, r, node))
        # we need to map any extended unicode characters into ascii
        char_mapping = {
            ord(ix): get_symbol(i)
            for i, ix in enumerate(unique(itertools.chain(l_inds, r_inds)))
        }
        return f"{l_inds},{r_inds}->{p_inds}".translate(char_mapping)

    def is_leaf(self, node):
        """Check if ``node`` is a leaf node in this tree.

        Parameters
        ----------
        node : node_type
            The node to check.

        Returns
        -------
        is_leaf : bool
        """
        return self.nodeops.is_leaf(node)

    def is_root(self, node):
        """Check if ``node`` is the root node in this tree.

        Parameters
        ----------
        node : node_type
            The node to check.

        Returns
        -------
        is_root : bool
        """
        return self.nodeops.is_supremum(node, self.N)

    def is_descendant(self, node, ancestor):
        """Check if ``node`` is a descendant of ``ancestor`` in this tree.

        Parameters
        ----------
        node : node_type
            The node to check.
        ancestor : node_type
            The potential ancestor node.

        Returns
        -------
        is_descendant : bool
        """
        return self.nodeops.node_issubset(node, ancestor)

    def get_peak_size(self, node):
        """Get the peak size for all but only the contractions required to
        produce ``node``. The value for the root note will be the peak size of
        the entire contraction.

        Parameters
        ----------
        node : node_type
            The node to compute the peak size of.

        Returns
        -------
        peak_size : int
        """
        if self.is_leaf(node):
            # leaf node is input
            return self.get_size(node)

        l, r = self.children[node]
        # peak either occurred while:
        # 1. we were forming left intermediate
        peakleft = self.get_peak_size(l)
        # 2. we were forming right intermediate (whilst holding left)
        peakright = self.get_size(l) + self.get_peak_size(r)
        # 3. or we were performing this contraction, including output
        peakthis = self.get_size(l) + self.get_size(r) + self.get_size(node)

        return max(peakleft, peakright, peakthis)

    def reorder_for_peak_size(self):
        """This reorders the depth first traversal of the tree to minimize
        the peak size of the contraction.
        """
        changed = False
        for p, l, r in self.traverse():
            sl = self.get_size(l)
            pl = self.get_peak_size(l)
            sr = self.get_size(r)
            pr = self.get_peak_size(r)
            # peak if hold left while we form right
            plr = max(pl, sl + pr)
            # peak if hold right while we form left
            prl = max(pr, sr + pl)
            if prl < plr:
                self.children[p] = r, l
                changed = True
        return changed
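The swap rule above compares two candidate peaks: forming the left child first and holding it while forming the right, versus the reverse. A toy standalone sketch of that decision (the `best_child_order` helper is hypothetical):

```python
def best_child_order(sl, pl, sr, pr):
    # sl/sr: output sizes of the left/right subtrees
    # pl/pr: peak sizes incurred while forming the left/right subtrees
    peak_lr = max(pl, sl + pr)  # form left first, hold it, then form right
    peak_rl = max(pr, sr + pl)  # form right first, hold it, then form left
    return ("lr", peak_lr) if peak_lr <= peak_rl else ("rl", peak_rl)

# holding the small tensor (size 2) while forming the high-peak side
# (peak 50) costs 52; the reverse order only costs 50
print(best_child_order(2, 10, 8, 50))  # -> ('rl', 50)
```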

    def get_centrality(self, node):
        try:
            return self.info[node]["centrality"]
        except KeyError:
            self.compute_centralities()
            return self.info[node]["centrality"]

    def total_flops(self, dtype=None, log=None):
        """Sum the flops contribution from every node in the tree.

        Parameters
        ----------
        dtype : {'float', 'complex', None}, optional
            Scale the answer depending on the assumed data type.
        """
        if self._track_flops:
            C = self.multiplicity * self._flops

        else:
            self._flops = 0
            for node, _, _ in self.traverse():
                self._flops += self.get_flops(node)

            self._track_flops = True
            C = self.multiplicity * self._flops

        if dtype is None:
            pass
        elif "float" in dtype:
            C *= 2
        elif "complex" in dtype:
            C *= 4
        else:
            raise ValueError(f"Unknown dtype {dtype}")

        if log is not None:
            C = math.log(max(C, 1), log)

        return C

    def total_write(self):
        """Sum the total amount of memory that will be created and operated on."""
        if not self._track_write:
            self._write = 0
            for node, _, _ in self.traverse():
                self._write += self.get_size(node)

            self._track_write = True

        return self.multiplicity * self._write

    def combo_cost(self, factor=DEFAULT_COMBO_FACTOR, combine=sum, log=None):
        t = 0
        for p in self.children:
            f = self.get_flops(p)
            w = self.get_size(p)
            t += combine((f, factor * w))

        t *= self.multiplicity

        if log is not None:
            t = math.log(t, log)

        return t

    total_cost = combo_cost

    def max_size(self, log=None):
        """The size of the largest intermediate tensor."""
        if self.N == 1:
            return self.get_size(self.root)

        if not self._track_size:
            self._sizes = MaxCounter()
            for node, _, _ in self.traverse():
                self._sizes.add(self.get_size(node))
            self._track_size = True

        size = self._sizes.max()

        if log is not None:
            size = math.log(size, log)

        return size

    def max_contraction_size(self, log=None):
        """The maximum size of a single contraction in the tree. This includes
        the size of the two input tensors and the output tensor, and can be a
        more practical measure of the peak memory required.

        Parameters
        ----------
        log : float, optional
            If provided, return the log of the size to this base.

        Returns
        -------
        size : int or float
            The maximum size of a single contraction in the tree, or its log.
        """
        Y = max(
            self.get_size(p) + self.get_size(l) + self.get_size(r)
            for p, (l, r) in self.children.items()
        )

        if log is not None:
            Y = math.log(Y, log)

        return Y

    def peak_size(self, order=None, log=None):
        """Get the peak concurrent size of tensors needed - this depends on the
        traversal order, i.e. the exact contraction path, not just the
        contraction tree.
        """
        tot_size = sum(self.get_size(node) for node in self.gen_leaves())
        peak = tot_size
        for p, l, r in self.traverse(order=order):
            tot_size += self.get_size(p)
            # measure peak assuming we need both inputs and output
            peak = max(peak, tot_size)
            tot_size -= self.get_size(l)
            tot_size -= self.get_size(r)

        if log is not None:
            peak = math.log(peak, log)

        return peak
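The memory accounting above can be simulated on a toy path: all leaves start resident, and each contraction allocates its output while both inputs are still live, before freeing them. A minimal sketch (the `simulate_peak` helper and its inputs are hypothetical):

```python
def simulate_peak(leaf_sizes, steps):
    # steps: sequence of (parent_size, left_size, right_size) contractions
    tot = sum(leaf_sizes)
    peak = tot
    for p, l, r in steps:
        tot += p               # the parent output is allocated...
        peak = max(peak, tot)  # ...while both inputs are still live
        tot -= l + r           # then the two inputs are freed
    return peak

# leaves of size 2, 3, 4; contract (2,3)->6, then (6,4)->1
print(simulate_peak([2, 3, 4], [(6, 2, 3), (1, 6, 4)]))  # -> 15
```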

    def contract_stats(self, force=False):
        """Simulteneously compute the total flops, write and size of the
        contraction tree. This is more efficient than calling each of the
        individual methods separately. Once computed, each quantity is then
        automatically tracked.

        Returns
        -------
        stats : dict[str, int]
            The total flops, write and size.
        """
        if force or not (
            self._track_flops and self._track_write and self._track_size
        ):
            self._flops = self._write = 0
            self._sizes = MaxCounter()

            for node, _, _ in self.traverse():
                self._flops += self.get_flops(node)
                node_size = self.get_size(node)
                self._write += node_size
                self._sizes.add(node_size)

            self._track_flops = self._track_write = self._track_size = True

        return {
            "flops": max(self.multiplicity * self._flops, 1),
            "write": max(self.multiplicity * self._write, 1),
            "size": max(self._sizes.max(), 1),
        }

    def arithmetic_intensity(self):
        """The ratio of total flops to total write - the higher the better for
        extracting good computational performance.
        """
        return self.total_flops(dtype=None) / self.total_write()

    def contraction_scaling(self):
        """This is computed simply as the maximum number of indices involved
        in any single contraction, which will match the scaling assuming that
        all dimensions are equal.
        """
        return max(len(self.get_involved(node)) for node in self.info)

    def contraction_cost(self, log=None):
        """Get the total number of scalar operations ~ time complexity."""
        return self.total_flops(dtype=None, log=log)

    def naive_cost(self, log=None):
        """Get the naive cost of performing this contraction as a single
        einsum summation, without any intermediate contractions. This is given
        the as product of the size of all indices.

        Parameters
        ----------
        log : float, optional
            If provided, return log of the cost to this base.
        """
        if log is None:
            return prod(self.size_dict[ix] for ix in self.appearances)
        else:
            return sum(
                math.log(self.size_dict[ix], log) for ix in self.appearances
            )

    def speedup(self, log=None):
        """Speedup compared to naive summation.

        Parameters
        ----------
        log : float, optional
            If provided, return log of the speedup to this base.
        """
        if log is None:
            return self.naive_cost() / self.contraction_cost()
        else:
            logc = self.contraction_cost(log=log)
            logn = self.naive_cost(log=log)
            return logn - logc

    def contraction_width(self, log=2):
        """Get log2 of the size of the largest tensor."""
        return self.max_size(log=log)

    def compressed_contract_stats(
        self,
        chi=None,
        order="surface_order",
        compress_late=None,
    ):
        if chi is None:
            chi = self.get_default_chi()

        if compress_late is None:
            compress_late = self.get_default_compress_late()

        hg = self.get_hypergraph(accel="auto")

        # conversion between tree nodes <-> hypergraph nodes during contraction
        tree_map = dict(zip(self.gen_leaves(), range(hg.get_num_nodes())))

        tracker = CompressedStatsTracker(hg, chi)

        for p, l, r in self.traverse(order):
            li = tree_map[l]
            ri = tree_map[r]

            tracker.update_pre_step()

            if compress_late:
                tracker.update_pre_compress(hg, li, ri)
                # compress just before we contract tensors
                hg.compress(chi=chi, edges=hg.get_node(li))
                hg.compress(chi=chi, edges=hg.get_node(ri))
                tracker.update_post_compress(hg, li, ri)

            tracker.update_pre_contract(hg, li, ri)
            pi = tree_map[p] = hg.contract(li, ri)
            tracker.update_post_contract(hg, pi)

            if not compress_late:
                # compress as soon as we can after contracting tensors
                tracker.update_pre_compress(hg, pi)
                hg.compress(chi=chi, edges=hg.get_node(pi))
                tracker.update_post_compress(hg, pi)

            tracker.update_post_step()

        return tracker

    def total_flops_compressed(
        self,
        chi=None,
        order="surface_order",
        compress_late=None,
        dtype=None,
        log=None,
    ):
        """Estimate the total flops for a compressed contraction of this tree
        with maximum bond size ``chi``. This includes basic estimates of the
        ops to perform contractions, QRs and SVDs.
        """
        if dtype is not None:
            raise ValueError(
                "Can only estimate cost in terms of "
                "number of abstract scalar ops."
            )

        F = self.compressed_contract_stats(
            chi=chi,
            order=order,
            compress_late=compress_late,
        ).flops

        if log is not None:
            F = math.log(F, log)

        return F

    contraction_cost_compressed = total_flops_compressed

    def total_write_compressed(
        self,
        chi=None,
        order="surface_order",
        compress_late=None,
        log=None,
    ):
        """Compute the total size of all intermediate tensors when a
        compressed contraction is performed with maximum bond size ``chi``,
        ordered by ``order``. This is relevant to time complexity and, e.g.,
        autodiff space complexity (since every intermediate is kept).
        """
        W = self.compressed_contract_stats(
            chi=chi,
            order=order,
            compress_late=compress_late,
        ).write

        if log is not None:
            W = math.log(W, log)

        return W

    def combo_cost_compressed(
        self,
        chi=None,
        order="surface_order",
        compress_late=None,
        factor=None,
        log=None,
    ):
        if factor is None:
            factor = self.get_default_combo_factor()

        C = self.total_flops_compressed(
            chi=chi, order=order, compress_late=compress_late
        ) + factor * self.total_write_compressed(
            chi=chi, order=order, compress_late=compress_late
        )

        if log is not None:
            C = math.log(C, log)

        return C

    total_cost_compressed = combo_cost_compressed

    def max_size_compressed(
        self, chi=None, order="surface_order", compress_late=None, log=None
    ):
        """Compute the maximum sized tensor produced when a compressed
        contraction is performed with maximum bond size ``chi``, ordered by
        ``order``. This is close to the ideal space complexity if only
        tensors that are being directly operated on are kept in memory.
        """
        S = self.compressed_contract_stats(
            chi=chi,
            order=order,
            compress_late=compress_late,
        ).max_size

        if log is not None:
            S = math.log(S, log)

        return S

    def peak_size_compressed(
        self,
        chi=None,
        order="surface_order",
        compress_late=None,
        accel="auto",
        log=None,
    ):
        """Compute the peak size of combined intermediate tensors when a
        compressed contraction is performed with maximum bond size ``chi``,
        ordered by ``order``. This is the practical space complexity if one is
        not swapping intermediates in and out of memory.
        """
        P = self.compressed_contract_stats(
            chi=chi,
            order=order,
            compress_late=compress_late,
        ).peak_size

        if log is not None:
            P = math.log(P, log)

        return P

    def contraction_width_compressed(
        self, chi=None, order="surface_order", compress_late=None, log=2
    ):
        """Compute log2 of the maximum sized tensor produced when a compressed
        contraction is performed with maximum bond size ``chi``, ordered by
        ``order``.
        """
        return self.max_size_compressed(chi, order, compress_late, log=log)

    def _update_tracked(self, node):
        if self._track_flops:
            self._flops += self.get_flops(node)
        if self._track_write:
            self._write += self.get_size(node)
        if self._track_size:
            self._sizes.add(self.get_size(node))

    def contract_nodes_pair(
        self,
        x,
        y,
        legs=None,
        cost=None,
        size=None,
        parent=None,
        check=False,
    ):
        """Contract node ``x`` with node ``y`` in the tree to create a new
        parent node, which is returned.

        Parameters
        ----------
        x : node_type
            The first node to contract.
        y : node_type
            The second node to contract.
        legs : dict[str, int], optional
            The effective 'legs' of the new node if already known. If not
            given, this is computed from the inputs of ``x`` and ``y``.
        cost : int, optional
            The cost of the contraction if already known. If not given, this is
            computed from the inputs of ``x`` and ``y``.
        size : int, optional
            The size of the new node if already known. If not given, this is
            computed from the inputs of ``x`` and ``y``.
        check : bool, optional
            Whether to check the inputs are valid.

        Returns
        -------
        parent : node_type
            The new parent node of ``x`` and ``y``.
        """
        self._add_node(x, check=check)
        self._add_node(y, check=check)
        nx, ny = self.get_extent(x), self.get_extent(y)

        if parent is None:
            if nx + ny == self.N:
                parent = self.root
            else:
                parent = self.nodeops.new_node_for_union([x, y])
        self._add_node(parent, check=check)

        # enforce left ordering of 'heaviest' subtrees
        if nx == ny:
            # deterministically break ties
            sortx = self.nodeops.node_tie_breaker(x)
            sorty = self.nodeops.node_tie_breaker(y)
        else:
            sortx = nx
            sorty = ny

        if sortx > sorty:
            lr = (x, y)
        else:
            lr = (y, x)

        self.children[parent] = lr

        if self.track_childless:
            self.childless.discard(parent)
            if x not in self.children and nx > 1:
                self.childless.add(x)
            if y not in self.children and ny > 1:
                self.childless.add(y)

        # pre-computed information
        if legs is not None:
            self.info[parent]["legs"] = legs
        if cost is not None:
            self.info[parent]["flops"] = cost
        if size is not None:
            self.info[parent]["size"] = size

        self._update_tracked(parent)

        return parent

    def contract_nodes(
        self,
        nodes,
        optimize="auto",
        grandparent=None,
        check=False,
        extra_opts=None,
    ):
        """Contract an arbitrary number of ``nodes`` in the tree to build up a
        subtree. The root of this subtree (a new intermediate) is returned.
        """
        # possibly convert from subgraph spec to node types
        nodes = tuple(self._add_node(node, check=check) for node in nodes)

        if len(nodes) == 1:
            return nodes[0]

        if len(nodes) == 2:
            return self.contract_nodes_pair(
                *nodes, parent=grandparent, check=check
            )

        from .interface import find_path

        # create the bottom and top nodes
        if grandparent is None:
            if sum(map(self.get_extent, nodes)) == self.N:
                # don't generate new node if root
                grandparent = self.root
            else:
                # assume we can generate new node
                grandparent = self.nodeops.new_node_for_union(nodes)

        self._add_node(grandparent, check=check)

        # with more than two nodes we need to find a path to fill in between
        #         \
        #         GN             <- 'grandparent'
        #        /  \
        #      ?????????
        #    ?????????????       <- to be filled with 'temp nodes'
        #   /  \    /   / \
        #  N0  N1  N2  N3  N4    <- ``nodes``, or, subgraphs
        #  /    \  /   /    \
        legs_inputs = tuple(map(self.get_legs, nodes))
        path_inputs = tuple(map(tuple, legs_inputs))

        try:
            # output legs of the grandparent (after slicing)
        # we don't use get_legs since we can do the shortcut below
            grand_legs = self.info[grandparent]["legs"]
        except KeyError:
            # compute legs directly from children
            if grandparent == self.root:
                # special case, need output ordering and sliced indices
                grand_legs = self.get_legs(grandparent)
            else:
                involved = legs_union(legs_inputs)
                grand_legs = {
                    ix: ix_count
                    for ix, ix_count in involved.items()
                    if ix_count < self.appearances[ix]
                }
                self.info[grandparent]["legs"] = grand_legs

        path_output = tuple(grand_legs)

        path = find_path(
            path_inputs,
            path_output,
            self.size_dict,
            optimize=optimize,
            **(extra_opts or {}),
        )

        # now that we have the path, create the intermediate nodes
        temp_nodes = list(nodes)
        for p in path[:-1]:
            to_contract = [temp_nodes.pop(i) for i in sorted(p, reverse=True)]
            temp_nodes.append(self.contract_nodes(to_contract, check=check))

        # want to explicitly specify the grandparent node:
        #     so do the final pairwise contraction separately
        self.contract_nodes(temp_nodes, grandparent=grandparent, check=check)

        return grandparent

    def is_complete(self):
        """Check every node has two children, unless it is a leaf."""
        if self.N == 1:
            return True

        too_many_nodes = len(self.info) > 2 * self.N - 1
        too_many_branches = len(self.children) > self.N - 1

        if too_many_nodes or too_many_branches:
            raise ValueError("Contraction tree seems to be over complete!")

        queue = [self.root]
        while queue:
            node = queue.pop()
            if self.is_leaf(node):
                continue
            try:
                queue.extend(self.children[node])
            except KeyError:
                return False

        return True

    def get_default_order(self):
        return "dfs"

    def _traverse_dfs(self):
        """Traverse the tree in a depth first, non-recursive, order."""
        ready = set(self.gen_leaves())
        queue = [self.root]

        while queue:
            node = queue[-1]
            l, r = self.children[node]

            # both node's children are ready -> we can yield this contraction
            if (l in ready) and (r in ready):
                ready.add(queue.pop())
                yield node, l, r
                continue

            if r not in ready:
                queue.append(r)
            if l not in ready:
                queue.append(l)
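The non-recursive, children-before-parents traversal above can be run standalone on a toy tree (the free function `traverse_dfs` is an illustrative extraction with a plain `children` dict, not cotengra's API):

```python
def traverse_dfs(root, children, leaves):
    # yield (parent, left, right) with both children always produced
    # before their parent, using an explicit stack instead of recursion
    ready = set(leaves)
    queue = [root]
    while queue:
        node = queue[-1]
        l, r = children[node]
        # both children ready -> this contraction can be yielded
        if l in ready and r in ready:
            ready.add(queue.pop())
            yield node, l, r
            continue
        # otherwise descend into whichever children are not yet formed
        if r not in ready:
            queue.append(r)
        if l not in ready:
            queue.append(l)

# tree: T contracts (A, S), S contracts (B, C) -> S must be formed first
children = {"T": ("A", "S"), "S": ("B", "C")}
print(list(traverse_dfs("T", children, "ABC")))
# -> [('S', 'B', 'C'), ('T', 'A', 'S')]
```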

    def _traverse_ordered(self, order):
        """Traverse the tree in the order that minimizes ``order(node)``, but
        still constrained to produce children before parents.
        """
        from bisect import bisect

        if order == "surface_order":
            order = self.surface_order

        seen = set()
        queue = [self.root]
        scores = [order(self.root)]

        while len(seen) != len(self.children):
            i = 0
            while i < len(queue):
                node = queue[i]
                if node not in seen:
                    for child in self.children[node]:
                        if self.get_extent(child) > 1:
                            # insert child into queue by score + before parent
                            score = order(child)
                            ci = bisect(scores[:i], score)
                            scores.insert(ci, score)
                            queue.insert(ci, child)
                            # parent moves extra place to right
                            i += 1
                    seen.add(node)
                i += 1

        for node in queue:
            yield (node, *self.children[node])

    def traverse(self, order=None):
        """Generate, in order, all the node merges in this tree. Non-recursive!
        This ensures children are always visited before their parent.

        Parameters
        ----------
        order : None, "dfs", or callable, optional
            How to order the contractions within the tree. If a callable is
            given (which should take a node as its argument), try to contract
            nodes that minimize this function first.

        Returns
        -------
        generator[tuple[node]]
            The bottom up ordered sequence of tree merges, each a
            tuple of ``(parent, left_child, right_child)``.

        See Also
        --------
        descend
        """
        if self.N == 1:
            return

        if order is None:
            order = self.get_default_order()

        if order == "dfs":
            yield from self._traverse_dfs()
        else:
            yield from self._traverse_ordered(order=order)

    def descend(self, mode="dfs"):
        """Generate, from root to leaves, all the node merges in this tree.
        Non-recursive! This ensures parents are visited before their children.

        Parameters
        ----------
        mode : {'dfs', 'bfs'}, optional
            How to expand from a parent node.

        Returns
        -------
        generator[tuple[node]]
            The top down ordered sequence of tree merges, each a
            tuple of ``(parent, left_child, right_child)``.

        See Also
        --------
        traverse
        """
        queue = [self.root]
        while queue:
            if mode == "dfs":
                parent = queue.pop(-1)
            elif mode == "bfs":
                parent = queue.pop(0)
            l, r = self.children[parent]
            yield parent, l, r
            if self.get_extent(l) > 1:
                queue.append(l)
            if self.get_extent(r) > 1:
                queue.append(r)
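The dfs/bfs distinction in ``descend`` comes down to which end of the queue the next parent is popped from. A minimal standalone sketch (again over a hypothetical ``children`` dict, not the tree's actual node type):

```python
def descend(children, root, mode="dfs"):
    # pop from the end for depth-first, from the front for breadth-first:
    # either way parents are always yielded before their children
    queue = [root]
    while queue:
        parent = queue.pop(-1 if mode == "dfs" else 0)
        l, r = children[parent]
        yield parent, l, r
        if l in children:
            queue.append(l)
        if r in children:
            queue.append(r)

children = {"abcdef": ("abcd", "ef"), "abcd": ("ab", "cd"),
            "ab": ("a", "b"), "cd": ("c", "d"), "ef": ("e", "f")}
dfs = [p for p, _, _ in descend(children, "abcdef", "dfs")]
bfs = [p for p, _, _ in descend(children, "abcdef", "bfs")]
# dfs -> ['abcdef', 'ef', 'abcd', 'cd', 'ab']
# bfs -> ['abcdef', 'abcd', 'ef', 'ab', 'cd']
```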

    def get_subtree(self, node, size, search="bfs", seed=None):
        """Get a subtree spanning down from ``node`` which will have ``size``
        leaves (themselves not necessarily leaves of the actual tree).

        Parameters
        ----------
        node : node_type
            The node of the tree to start with.
        size : int
            How many subtree leaves to aim for.
        search : {'bfs', 'dfs', 'random'}, optional
            How to build the tree:

                - 'bfs': breadth first expansion
                - 'dfs': depth first expansion (largest nodes first)
                - 'random': random expansion

        seed : None, int or random.Random, optional
            Random number generator seed, if ``search`` is 'random'.

        Returns
        -------
        sub_leaves : tuple[node_type]
            Nodes which are subtree leaves.
        branches : tuple[node_type]
            Nodes which are between the subtree leaves and root.
        """
        # nodes which are subtree leaves
        branches = []

        # actual tree leaves - can't expand
        real_leaves = []

        # nodes to expand
        queue = [node]

        if search == "random":
            rng = get_rng(seed)
        else:
            rng = None
            if search == "bfs":
                i = 0
            elif search == "dfs":
                i = -1

        while (len(queue) + len(real_leaves) < size) and queue:
            if rng is not None:
                i = rng.randint(0, len(queue) - 1)

            p = queue.pop(i)
            if self.is_leaf(p):
                real_leaves.append(p)
                continue

            # the left child is always >= in weight than the right child
            #     if we append it last then ``.pop(-1)`` above performs a
            #     depth first search, sorted by node subgraph size
            l, r = self.children[p]

            queue.append(r)
            queue.append(l)
            branches.append(p)

        # nodes at the bottom of the subtree
        sub_leaves = queue + real_leaves

        return tuple(sub_leaves), tuple(branches)
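The frontier expansion used here can be sketched in isolation: starting from a node, repeatedly replace a frontier member with its two children until the frontier (plus any true leaves hit, which cannot be expanded) reaches the requested size. A toy bfs-only version over a hypothetical ``children`` mapping:

```python
def get_subtree(children, node, size):
    # expand a frontier downwards (bfs order) until it holds ``size``
    # nodes, recording the internal branch nodes passed through
    branches, real_leaves, queue = [], [], [node]
    while (len(queue) + len(real_leaves) < size) and queue:
        p = queue.pop(0)
        if p not in children:       # an actual leaf - cannot expand
            real_leaves.append(p)
            continue
        l, r = children[p]
        queue.extend((r, l))
        branches.append(p)
    return tuple(queue + real_leaves), tuple(branches)

children = {"abcd": ("ab", "cd"), "ab": ("a", "b"), "cd": ("c", "d")}
sub_leaves, branches = get_subtree(children, "abcd", size=3)
# sub_leaves spans the tree below "abcd"; branches are the nodes between
```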

    def remove_ind(self, ind, project=None, inplace=False):
        """Remove (i.e. by default slice) index ``ind`` from this contraction
        tree, taking care to update all relevant information about each node.
        """
        tree = self if inplace else self.copy()

        if ind in tree.sliced_inds:
            raise ValueError(f"Index {ind} already sliced.")

        # make sure all flops and size information has been populated
        tree.contract_stats()

        d = tree.size_dict[ind]
        if project is None:
            # we are slicing the index
            si = SliceInfo(ind not in tree.output, ind, d, None)
            tree.multiplicity = tree.multiplicity * d
        else:
            si = SliceInfo(ind not in tree.output, ind, 1, project)

        # update the ordered slice information dictionary, but maintain the
        # order such that output sliced indices always appear first ->
        # enforced by the dataclass SliceInfo ordering
        tree.sliced_inds = {
            si.ind: si for si in sorted((*tree.sliced_inds.values(), si))
        }

        for node, node_info in tree.info.items():
            if self.is_leaf(node):
                # handle leaves separately
                i = self.nodeops.node_get_single_el(node)
                term = tree.inputs[i]
                if ind in term:
                    # n.b. leaves don't contribute to size, flops or write
                    # simply recalculate all information, incl. preprocessing
                    tree._remove_node(node)
                    tree.sliced_inputs = tree.sliced_inputs | frozenset([i])
            else:
                involved = tree.get_involved(node)
                if ind not in involved:
                    # if ind doesn't feature in this node (contraction)
                    # -> nothing to do
                    continue

                # else update all the relevant information about this node
                # -> flops changes for all involved indices
                node_info["involved"] = legs_without(involved, ind)
                old_flops = tree.get_flops(node)
                new_flops = old_flops // d
                node_info["flops"] = new_flops
                tree._flops += new_flops - old_flops

                # -> size and write only changes for node legs (output) indices
                legs = tree.get_legs(node)
                if ind in legs:
                    node_info["legs"] = legs_without(legs, ind)
                    old_size = tree.get_size(node)
                    tree._sizes.discard(old_size)
                    new_size = old_size // d
                    tree._sizes.add(new_size)
                    node_info["size"] = new_size
                    tree._write += new_size - old_size

                # delete info we can't change
                for k in (
                    "inds",
                    "einsum_eq",
                    "can_dot",
                    "tensordot_axes",
                    "tensordot_perm",
                ):
                    tree.info[node].pop(k, None)

        tree.already_optimized.clear()
        tree.contraction_cores.clear()

        return tree

    remove_ind_ = functools.partialmethod(remove_ind, inplace=True)
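The bookkeeping in ``remove_ind`` relies on a simple identity: removing an index of size ``d`` divides the flops of every contraction it is involved in by ``d``, and the size of every tensor whose legs include it by ``d``, because the full result is recovered by summing ``d`` independent slices. A pure-Python sketch for a single matrix multiply, slicing the contracted index:

```python
def matmul(A, B):
    """C[i][j] = sum_k A[i][k] * B[k][j], all at once."""
    return [
        [sum(a_ik * B[k][j] for k, a_ik in enumerate(row))
         for j in range(len(B[0]))]
        for row in A
    ]

def matmul_sliced(A, B):
    """The same contraction, but with the shared index k sliced: each of
    the len(B) slices is a d-times smaller contraction, summed explicitly."""
    n, m = len(A), len(B[0])
    C = [[0] * m for _ in range(n)]
    for k in range(len(B)):          # the explicit sum over the sliced index
        for i in range(n):
            for j in range(m):
                C[i][j] += A[i][k] * B[k][j]
    return C

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
assert matmul(A, B) == matmul_sliced(A, B) == [[19, 22], [43, 50]]
```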

    def restore_ind(self, ind, inplace=False):
        """Restore (unslice or un-project) index ``ind`` to this contraction
        tree, taking care to update all relevant information about each node.

        Parameters
        ----------
        ind : str
            The index to restore.
        inplace : bool, optional
            Whether to perform the restoration inplace or not.

        Returns
        -------
        ContractionTree
        """
        tree = self if inplace else self.copy()

        # pop sliced index info
        si = tree.sliced_inds.pop(ind)

        # make sure all flops and size information has been populated
        tree.contract_stats()
        tree.multiplicity //= si.size

        # handle inputs
        for i, term in enumerate(tree.inputs):
            # this is the original term with all indices
            if ind in term:
                tree._remove_node(self.input_to_node(i))
                if all(ix not in tree.sliced_inds for ix in term):
                    # mark this input as not sliced
                    tree.sliced_inputs = tree.sliced_inputs - frozenset([i])

        # delete and re-add dependent intermediates
        for p, l, r in tree.traverse():
            if ind in tree.get_legs(l) or ind in tree.get_legs(r):
                tree._remove_node(p)
                tree.contract_nodes_pair(l, r, parent=p)

        # reset caches
        tree.already_optimized.clear()
        tree.contraction_cores.clear()

        return tree

    restore_ind_ = functools.partialmethod(restore_ind, inplace=True)

    def unslice_rand(self, seed=None, inplace=False):
        """Unslice (restore) a random index from this contraction tree.

        Parameters
        ----------
        seed : None, int or random.Random, optional
            Random number generator seed.
        inplace : bool, optional
            Whether to perform the unslicing inplace or not.

        Returns
        -------
        ContractionTree
        """
        rng = get_rng(seed)
        ix = rng.choice(tuple(self.sliced_inds))
        return self.restore_ind(ix, inplace=inplace)

    unslice_rand_ = functools.partialmethod(unslice_rand, inplace=True)

    def unslice_all(self, inplace=False):
        """Unslice (restore) all sliced indices from this contraction tree.

        Parameters
        ----------
        inplace : bool, optional
            Whether to perform the unslicing inplace or not.

        Returns
        -------
        ContractionTree
        """
        tree = self if inplace else self.copy()

        for ind in tuple(tree.sliced_inds):
            tree.restore_ind_(ind)

        return tree

    unslice_all_ = functools.partialmethod(unslice_all, inplace=True)

    def calc_subtree_candidates(self, pwr=2, what="flops"):
        # get all intermediate nodes
        candidates = list(self.children)

        if what == "size":
            weights = [self.get_size(x) for x in candidates]

        elif what == "flops":
            weights = [self.get_flops(x) for x in candidates]

        if pwr == "log":
            weights = [math.log2(max(2, w)) for w in weights]
        else:
            max_weight = max(weights)
            # can be bigger than numpy int/float allows
            weights = [float(w / max_weight) ** (1 / pwr) for w in weights]

        # sort by descending score
        candidates, weights = zip(
            *sorted(zip(candidates, weights), key=lambda x: -x[1])
        )

        return list(candidates), list(weights)
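The weighting transform here compresses each raw score via ``(w / max_w) ** (1 / pwr)`` (or ``log2`` for ``pwr='log'``), so that a larger ``pwr`` flattens the distribution and makes weighted random selection more explorative. A standalone sketch of just that transform:

```python
import math

def candidate_weights(weights, pwr=2):
    # compress raw scores into selection weights: larger ``pwr`` flattens
    # the distribution, making weighted random choice more explorative
    if pwr == "log":
        return [math.log2(max(2, w)) for w in weights]
    mx = max(weights)
    return [float(w / mx) ** (1 / pwr) for w in weights]

flops = [10**12, 10**9, 10**6]
w2 = candidate_weights(flops, pwr=2)    # sqrt scaling
w8 = candidate_weights(flops, pwr=8)    # flatter still
# the raw 10**6 : 1 ratio becomes 10**3 : 1 (pwr=2), ~5.6 : 1 (pwr=8)
```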

    def _subtree_remove_and_optimize(
        self,
        sub_root,
        sub_leaves,
        sub_branches,
        already_optimized,
        node_cost,
        minimize,
        opt,
        pbar,
    ):
        current_cost = node_cost(self, sub_root)
        for node in sub_branches:
            # these are the intermediates *between* leaves and sub-root
            if minimize == "size":
                current_cost = max(current_cost, node_cost(self, node))
            else:
                current_cost += node_cost(self, node)
            self._remove_node(node)

        # make the optimizer more efficient by supplying accurate cap
        opt.cost_cap = max(2, current_cost)

        # and reoptimize the leaves
        self.contract_nodes(sub_leaves, optimize=opt, grandparent=sub_root)
        already_optimized.add(sub_leaves)

        if pbar is not None:
            pbar.update()
            pbar.set_description(_describe_tree(self), refresh=False)

    def _subtree_reconfigure_descend(
        self,
        subtree_size,
        subtree_search,
        maxiter,
        seed,
        minimize,
        opt,
        already_optimized,
        node_cost,
        pbar,
    ):
        candidates = [self.root]
        any_modified = False

        def _possibly_add_children(sub_root, any_modified):
            if self.get_extent(sub_root) > subtree_size:
                # possibly extend with node children, if not close to bottom
                lnode, rnode = self.children[sub_root]
                if self.get_extent(lnode) >= 2:
                    candidates.append(lnode)
                if self.get_extent(rnode) >= 2:
                    candidates.append(rnode)

            if len(candidates) == 0:
                # exhausted queue
                if any_modified:
                    # but have made *any* changes -> go again from top
                    candidates.append(self.root)
                    any_modified = False

            return any_modified

        r = 0
        while candidates and r < maxiter:
            sub_root = candidates.pop(0)

            # get a subtree to possibly reconfigure
            sub_leaves, sub_branches = self.get_subtree(
                sub_root, size=subtree_size, search=subtree_search, seed=seed
            )

            # check if it's already been optimized
            sub_leaves = frozenset(sub_leaves)
            if sub_leaves in already_optimized:
                any_modified = _possibly_add_children(sub_root, any_modified)
                continue

            # else remove the branches, keeping track of current cost
            self._subtree_remove_and_optimize(
                sub_root,
                sub_leaves,
                sub_branches,
                already_optimized,
                node_cost,
                minimize,
                opt,
                pbar,
            )
            any_modified = _possibly_add_children(sub_root, True)
            r += 1

    def _subtree_reconfigure_rand_select(
        self,
        subtree_size,
        subtree_search,
        weight_what,
        weight_pwr,
        select,
        maxiter,
        seed,
        minimize,
        opt,
        already_optimized,
        node_cost,
        pbar,
    ):
        if select == "random":
            rng = get_rng(seed)
        else:
            rng = None
            if select == "max":
                i = 0
            elif select == "min":
                i = -1

        candidates, weights = self.calc_subtree_candidates(
            pwr=weight_pwr, what=weight_what
        )

        r = 0
        while candidates and r < maxiter:
            if rng is not None:
                (i,) = rng.choices(range(len(candidates)), weights=weights)

            weights.pop(i)
            sub_root = candidates.pop(i)

            # get a subtree to possibly reconfigure
            sub_leaves, sub_branches = self.get_subtree(
                sub_root, size=subtree_size, search=subtree_search, seed=seed
            )

            # check if it's already been optimized
            sub_leaves = frozenset(sub_leaves)
            if sub_leaves in already_optimized:
                continue

            # else remove the branches, keeping track of current cost
            self._subtree_remove_and_optimize(
                sub_root,
                sub_leaves,
                sub_branches,
                already_optimized,
                node_cost,
                minimize,
                opt,
                pbar,
            )

            # if we have reconfigured simply re-add all candidates
            candidates, weights = self.calc_subtree_candidates(
                pwr=weight_pwr, what=weight_what
            )

            r += 1

    def subtree_reconfigure(
        self,
        subtree_size=8,
        subtree_search="bfs",
        weight_what="flops",
        weight_pwr=2,
        select="max",
        maxiter=500,
        seed=None,
        minimize=None,
        optimize=None,
        inplace=False,
        progbar=False,
    ):
        """Reconfigure subtrees of this tree with locally optimal paths.

        Parameters
        ----------
        subtree_size : int, optional
            The size of subtree to consider. Cost is exponential in this.
        subtree_search : {'bfs', 'dfs', 'random'}, optional
            How to build the subtrees:

            - 'bfs': breadth-first-search creating balanced subtrees
            - 'dfs': depth-first-search creating imbalanced subtrees
            - 'random': random subtree building

        weight_what : {'flops', 'size'}, optional
            When assessing nodes from which to build and optimize subtrees,
            whether to score them by the (local) contraction cost or tensor
            size.
        weight_pwr : int, optional
            When assessing nodes from which to build and optimize subtrees,
            how to scale their score into a probability:
            ``score**(1 / weight_pwr)``. The larger this is, the more
            explorative the algorithm is when ``select='random'``.
        select : {'descend', 'max', 'min', 'random'}, optional
            What order to select node subtrees to optimize:

            - 'descend': start from the root and then descend into children. In
              this case the weights and weight_pwr are ignored since this is a
              deterministic order.
            - 'max': choose the highest score first
            - 'min': choose the lowest score first
            - 'random': choose randomly weighted on score - see ``weight_pwr``.

        maxiter : int, optional
            How many subtree optimizations to perform; the algorithm can
            terminate before this if all subtrees have been optimized.
        seed : int, optional
            A random seed (seeds Python's ``random`` module).
        minimize : {'flops', 'size'}, optional
            Whether to minimize with respect to contraction flops or size.
        optimize : None or PathOptimizer, optional
            The optimizer to use for each subtree, defaulting to an exact
            dynamic programming optimizer.
        inplace : bool, optional
            Whether to perform the reconfiguration inplace or not.
        progbar : bool, optional
            Whether to show live progress of the reconfiguration.

        Returns
        -------
        ContractionTree
        """
        tree = self if inplace else self.copy()
        tree.reset_contraction_indices()

        # ensure these have been computed and thus are being tracked
        tree.contract_stats()

        if minimize is None:
            minimize = self.get_default_objective()
        scorer = get_score_fn(minimize)
        node_cost = getattr(scorer, "cost_local_tree_node", lambda t, n: 2)

        if optimize is None:
            from .pathfinders.path_basic import OptimalOptimizer

            minimize = scorer.get_dynamic_programming_minimize()
            opt = OptimalOptimizer(minimize=minimize)
        else:
            opt = optimize

        # different caches as we might want to reconfigure one before other
        tree.already_optimized.setdefault(minimize, set())
        already_optimized = tree.already_optimized[minimize]

        if progbar:
            import tqdm

            pbar = tqdm.tqdm()
            pbar.set_description(_describe_tree(tree), refresh=False)
        else:
            pbar = None

        try:
            reconf_kwargs = {
                "subtree_size": subtree_size,
                "subtree_search": subtree_search,
                "maxiter": maxiter,
                "seed": seed,
                "minimize": minimize,
                "opt": opt,
                "already_optimized": already_optimized,
                "node_cost": node_cost,
                "pbar": pbar,
            }

            if select == "descend":
                tree._subtree_reconfigure_descend(**reconf_kwargs)
            else:
                reconf_kwargs["weight_what"] = weight_what
                reconf_kwargs["weight_pwr"] = weight_pwr
                reconf_kwargs["select"] = select
                tree._subtree_reconfigure_rand_select(**reconf_kwargs)

        finally:
            if progbar:
                pbar.close()

        return tree

    subtree_reconfigure_ = functools.partialmethod(
        subtree_reconfigure, inplace=True
    )

    def subtree_reconfigure_forest(
        self,
        num_trees=8,
        num_restarts=10,
        restart_fraction=0.5,
        subtree_maxiter=100,
        subtree_size=10,
        subtree_search=("random", "bfs"),
        subtree_select=("random",),
        subtree_weight_what=("flops", "size"),
        subtree_weight_pwr=(2,),
        parallel="auto",
        parallel_maxiter_steps=4,
        minimize=None,
        seed=None,
        progbar=False,
        inplace=False,
    ):
        """'Forested' version of ``subtree_reconfigure`` which is more
        explorative and can be parallelized. It stochastically generates
        a 'forest' reconfigured trees, then only keeps some fraction of these
        to generate the next forest.

        Parameters
        ----------
        num_trees : int, optional
            The number of trees to reconfigure at each stage.
        num_restarts : int, optional
            The number of times to halt, prune and then restart the
            tree reconfigurations.
        restart_fraction : float, optional
            The fraction of trees to keep at each stage and generate the next
            forest from.
        subtree_maxiter : int, optional
            Number of subtree reconfigurations per step.
            ``num_restarts * subtree_maxiter`` is the max number of total
            subtree reconfigurations for the final tree produced.
        subtree_size : int, optional
            The size of subtrees to search for and reconfigure.
        subtree_search : tuple[{'random', 'bfs', 'dfs'}], optional
            Tuple of options for the ``search`` kwarg of
            :meth:`ContractionTree.subtree_reconfigure` to randomly sample.
        subtree_select : tuple[{'random', 'max', 'min'}], optional
            Tuple of options for the ``select`` kwarg of
            :meth:`ContractionTree.subtree_reconfigure` to randomly sample.
        subtree_weight_what : tuple[{'flops', 'size'}], optional
            Tuple of options for the ``weight_what`` kwarg of
            :meth:`ContractionTree.subtree_reconfigure` to randomly sample.
        subtree_weight_pwr : tuple[int], optional
            Tuple of options for the ``weight_pwr`` kwarg of
            :meth:`ContractionTree.subtree_reconfigure` to randomly sample.
        parallel : 'auto', False, True, int, or distributed.Client
            Whether to parallelize the search.
        parallel_maxiter_steps : int, optional
            If parallelizing, how many steps to break each reconfiguration into
            in order to evenly saturate many processes.
        minimize : {'flops', 'size', ..., Objective}, optional
            Whether to minimize the total flops or maximum size of the
            contraction tree.
        seed : None, int or random.Random, optional
            A random seed to use.
        progbar : bool, optional
            Whether to show live progress.
        inplace : bool, optional
            Whether to perform the subtree reconfiguration inplace.

        Returns
        -------
        ContractionTree
        """
        tree = self if inplace else self.copy()
        tree.reset_contraction_indices()

        # candidate trees
        num_keep = max(1, int(num_trees * restart_fraction))

        # how to rank the trees
        if minimize is None:
            minimize = self.get_default_objective()
        score = get_score_fn(minimize)

        rng = get_rng(seed)

        # set up the initial 'forest' and parallel machinery
        pool = parse_parallel_arg(parallel)
        is_scatter_pool = can_scatter(pool)
        if is_scatter_pool:
            is_worker = maybe_leave_pool(pool)
            # store the trees as futures for the entire process
            forest = [scatter(pool, tree)]
            maxiter = subtree_maxiter // parallel_maxiter_steps
        else:
            forest = [tree]
            maxiter = subtree_maxiter

        if progbar:
            import tqdm

            pbar = tqdm.tqdm(total=num_restarts)
            pbar.set_description(_describe_tree(tree), refresh=False)

        try:
            for _ in range(num_restarts):
                # on the next round take only the best trees
                forest = itertools.cycle(forest[:num_keep])

                # select some random configurations
                saplings = [
                    {
                        "tree": next(forest),
                        "maxiter": maxiter,
                        "minimize": minimize,
                        "subtree_size": subtree_size,
                        "subtree_search": rng.choice(subtree_search),
                        "select": rng.choice(subtree_select),
                        "weight_pwr": rng.choice(subtree_weight_pwr),
                        "weight_what": rng.choice(subtree_weight_what),
                    }
                    for _ in range(num_trees)
                ]

                if pool is None:
                    forest = [_reconfigure_tree(**s) for s in saplings]
                    res = [{"tree": t, **_get_tree_info(t)} for t in forest]
                elif not is_scatter_pool:
                    forest_futures = [
                        submit(pool, _reconfigure_tree, **s) for s in saplings
                    ]
                    forest = [f.result() for f in forest_futures]
                    res = [{"tree": t, **_get_tree_info(t)} for t in forest]
                else:
                    # submit in smaller steps to saturate processes
                    for _ in range(parallel_maxiter_steps):
                        for s in saplings:
                            s["tree"] = submit(pool, _reconfigure_tree, **s)

                    # compute scores remotely then gather
                    forest_futures = [s["tree"] for s in saplings]
                    res_futures = [
                        submit(pool, _get_tree_info, t) for t in forest_futures
                    ]
                    res = [
                        {"tree": tree_future, **res_future.result()}
                        for tree_future, res_future in zip(
                            forest_futures, res_futures
                        )
                    ]

                # update the order of the new forest
                res.sort(key=score)
                forest = [r["tree"] for r in res]

                if progbar:
                    pbar.update()
                    if pool is None:
                        d = _describe_tree(forest[0])
                    else:
                        d = submit(pool, _describe_tree, forest[0]).result()
                    pbar.set_description(d, refresh=False)

        finally:
            if progbar:
                pbar.close()

        if is_scatter_pool:
            tree.set_state_from(forest[0].result())
            maybe_rejoin_pool(is_worker, pool)
        else:
            tree.set_state_from(forest[0])

        return tree

    subtree_reconfigure_forest_ = functools.partialmethod(
        subtree_reconfigure_forest, inplace=True
    )
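The prune-and-regrow strategy above is generic: keep the best ``restart_fraction`` of candidates, cycle over them to seed a new stochastic generation, rank, and repeat. A toy stand-in minimizing a scalar objective (names here are illustrative, not cotengra's API):

```python
import itertools
import random

def forest_search(score, perturb, init, num_trees=8, num_restarts=10,
                  restart_fraction=0.5, seed=42):
    # keep the best fraction, reseed a new generation from them, repeat
    rng = random.Random(seed)
    num_keep = max(1, int(num_trees * restart_fraction))
    forest = [init]
    for _ in range(num_restarts):
        parents = itertools.cycle(forest[:num_keep])
        forest = sorted(
            (perturb(next(parents), rng) for _ in range(num_trees)),
            key=score,
        )
    return forest[0]

# toy objective: walk x towards 0 by random local moves
best = forest_search(
    score=lambda x: x * x,
    perturb=lambda x, rng: x + rng.uniform(-1, 1),
    init=10.0,
)
```

In the method above the same loop runs over full ``ContractionTree`` copies, with the perturbation replaced by randomized ``subtree_reconfigure`` calls, possibly submitted to a parallel pool.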

    simulated_anneal = simulated_anneal_tree
    simulated_anneal_ = functools.partialmethod(simulated_anneal, inplace=True)
    parallel_temper = parallel_temper_tree
    parallel_temper_ = functools.partialmethod(parallel_temper, inplace=True)

    def slice(
        self,
        target_size=None,
        target_overhead=None,
        target_slices=None,
        temperature=0.01,
        minimize=None,
        allow_outer=True,
        max_repeats=16,
        reslice=False,
        seed=None,
        inplace=False,
    ):
        """Slice this tree (turn some indices into indices which are explicitly
        summed over rather than being part of contractions). The indices are
        stored in ``tree.sliced_inds``, and the contraction width updated to
        take account of the slicing. Calling ``tree.contract(arrays)`` moreover
        which automatically perform the slicing and summation.

        Parameters
        ----------
        target_size : int, optional
            The target number of entries in the largest tensor of the sliced
            contraction. The search algorithm will terminate after this is
            reached.
        target_slices : int, optional
            The target or minimum number of 'slices' to consider - individual
            contractions after slicing indices. The search algorithm will
            terminate after this is breached. This is on top of the current
            number of slices.
        target_overhead : float, optional
            The target increase in total number of floating point operations.
            For example, a value of ``2.0`` will terminate the search just
            before the cost of computing all the slices individually breaches
            twice that of computing the original contraction all at once.
        temperature : float, optional
            How much to randomize the repeated search.
        minimize : {'flops', 'size', ..., Objective}, optional
            Which metric to score the overhead increase against.
        allow_outer : bool, optional
            Whether to allow slicing of outer indices.
        max_repeats : int, optional
            How many times to repeat the search with a slight randomization.
        reslice : bool, optional
            Whether to reslice the tree, i.e. first remove all currently
            sliced indices and start the search again. Generally any 'good'
            sliced indices will be easily found again.
        seed : None, int or random.Random, optional
            A random seed or generator to use for the search.
        inplace : bool, optional
            Whether to remove the indices from this tree inplace or not.

        Returns
        -------
        ContractionTree

        See Also
        --------
        SliceFinder, ContractionTree.slice_and_reconfigure
        """
        from .slicer import SliceFinder

        if minimize is None:
            minimize = self.get_default_objective()

        tree = self if inplace else self.copy()

        if reslice:
            if target_slices is not None:
                target_slices *= tree.nslices
            tree.unslice_all_()

        sf = SliceFinder(
            tree,
            target_size=target_size,
            target_overhead=target_overhead,
            target_slices=target_slices,
            temperature=temperature,
            minimize=minimize,
            allow_outer=allow_outer,
            seed=seed,
        )

        ix_sl, _ = sf.search(max_repeats)
        for ix in ix_sl:
            tree.remove_ind_(ix)

        return tree

    slice_ = functools.partialmethod(slice, inplace=True)

    def slice_and_reconfigure(
        self,
        target_size,
        step_size=2,
        temperature=0.01,
        minimize=None,
        allow_outer=True,
        max_repeats=16,
        reslice=False,
        reconf_opts=None,
        progbar=False,
        inplace=False,
    ):
        """Interleave slicing (removing indices into an exterior sum) with
        subtree reconfiguration to minimize the overhead induced by this
        slicing.

        Parameters
        ----------
        target_size : int
            Slice the tree until the maximum intermediate size is this or
            smaller.
        step_size : int, optional
            The minimum size reduction to try to achieve before switching to a
            round of subtree reconfiguration.
        temperature : float, optional
            The temperature to supply to ``SliceFinder`` for searching for
            indices.
        minimize : {'flops', 'size', ..., Objective}, optional
            The metric to minimize when slicing and reconfiguring subtrees.
        max_repeats : int, optional
            The number of slicing attempts to perform per search.
        progbar : bool, optional
            Whether to show live progress.
        inplace : bool, optional
            Whether to perform the slicing and reconfiguration inplace.
        reconf_opts : None or dict, optional
            Supplied to
            :meth:`ContractionTree.subtree_reconfigure` or
            :meth:`ContractionTree.subtree_reconfigure_forest`, depending on
            `'forested'` key value.
        """
        tree = self if inplace else self.copy()

        reconf_opts = {} if reconf_opts is None else dict(reconf_opts)

        if minimize is None:
            minimize = self.get_default_objective()
        minimize = get_score_fn(minimize)

        reconf_opts.setdefault("minimize", minimize)
        forested_reconf = reconf_opts.pop("forested", False)

        if progbar:
            import tqdm

            pbar = tqdm.tqdm()
            pbar.set_description(_describe_tree(tree), refresh=False)

        try:
            while tree.max_size() > target_size:
                tree.slice_(
                    temperature=temperature,
                    target_slices=step_size,
                    minimize=minimize,
                    allow_outer=allow_outer,
                    max_repeats=max_repeats,
                    reslice=reslice,
                )
                if forested_reconf:
                    tree.subtree_reconfigure_forest_(**reconf_opts)
                else:
                    tree.subtree_reconfigure_(**reconf_opts)

                if progbar:
                    pbar.update()
                    pbar.set_description(_describe_tree(tree), refresh=False)
        finally:
            if progbar:
                pbar.close()

        return tree

    slice_and_reconfigure_ = functools.partialmethod(
        slice_and_reconfigure, inplace=True
    )

    def slice_and_reconfigure_forest(
        self,
        target_size,
        step_size=2,
        num_trees=8,
        restart_fraction=0.5,
        temperature=0.02,
        max_repeats=32,
        reslice=False,
        minimize=None,
        allow_outer=True,
        parallel="auto",
        progbar=False,
        inplace=False,
        reconf_opts=None,
    ):
        """'Forested' version of :meth:`ContractionTree.slice_and_reconfigure`.
        This maintains a 'forest' of trees with different slicing and subtree
        reconfiguration attempts, pruning the worst at each step and generating
        a new forest from the best.

        Parameters
        ----------
        target_size : int
            Slice the tree until the maximum intermediate size is this or
            smaller.
        step_size : int, optional
            The minimum size reduction to try and achieve before switching to a
            round of subtree reconfiguration.
        num_trees : int, optional
            The number of trees to maintain in the forest, the worst of
            which are pruned and regenerated from the best at each step.
        restart_fraction : float, optional
            The fraction of trees to keep at each stage and generate the next
            forest from.
        temperature : float, optional
            The temperature at which to randomize the sliced index search.
        max_repeats : int, optional
            The number of slicing attempts to perform per search.
        reslice : bool, optional
            Whether to allow removing and re-finding existing sliced indices.
        minimize : {'flops', 'size', ..., Objective}, optional
            The metric to minimize when slicing and reconfiguring subtrees.
        allow_outer : bool, optional
            Whether to allow slicing of outer (output) indices.
        parallel : 'auto', False, True, int, or distributed.Client
            Whether to parallelize the search.
        progbar : bool, optional
            Whether to show live progress.
        inplace : bool, optional
            Whether to perform the slicing and reconfiguration inplace.
        reconf_opts : None or dict, optional
            Supplied to
            :meth:`ContractionTree.slice_and_reconfigure`.

        Returns
        -------
        ContractionTree
        """
        tree = self if inplace else self.copy()
        tree.reset_contraction_indices()

        # candidate trees
        num_keep = max(1, int(num_trees * restart_fraction))

        # how to rank the trees
        if minimize is None:
            minimize = self.get_default_objective()
        score = get_score_fn(minimize)

        # set up the initial 'forest' and parallel machinery
        pool = parse_parallel_arg(parallel)
        is_scatter_pool = can_scatter(pool)
        if is_scatter_pool:
            is_worker = maybe_leave_pool(pool)
            # store the trees as futures for the entire process
            forest = [scatter(pool, tree)]
        else:
            forest = [tree]

        if progbar:
            import tqdm

            pbar = tqdm.tqdm()
            pbar.set_description(_describe_tree(tree), refresh=False)

        next_size = tree.max_size()

        try:
            while True:
                next_size //= step_size

                # on the next round take only the best trees
                forest = itertools.cycle(forest[:num_keep])

                saplings = [
                    {
                        "tree": next(forest),
                        "target_size": next_size,
                        "step_size": step_size,
                        "temperature": temperature,
                        "max_repeats": max_repeats,
                        "reconf_opts": reconf_opts,
                        "allow_outer": allow_outer,
                        "reslice": reslice,
                    }
                    for _ in range(num_trees)
                ]

                if pool is None:
                    forest = [
                        _slice_and_reconfigure_tree(**s) for s in saplings
                    ]
                    res = [{"tree": t, **_get_tree_info(t)} for t in forest]

                elif not is_scatter_pool:
                    # simple pool with no pass by reference
                    forest_futures = [
                        submit(pool, _slice_and_reconfigure_tree, **s)
                        for s in saplings
                    ]
                    forest = [f.result() for f in forest_futures]
                    res = [{"tree": t, **_get_tree_info(t)} for t in forest]

                else:
                    forest_futures = [
                        submit(pool, _slice_and_reconfigure_tree, **s)
                        for s in saplings
                    ]

                    # compute scores remotely then gather
                    res_futures = [
                        submit(pool, _get_tree_info, t) for t in forest_futures
                    ]
                    res = [
                        {"tree": tree_future, **res_future.result()}
                        for tree_future, res_future in zip(
                            forest_futures, res_futures
                        )
                    ]

                # we want to sort by score, but also favour sampling as
                # many different sliced index combos as possible
                #    ~ [1, 1, 1, 2, 2, 3] -> [1, 2, 3, 1, 2, 1]
                res.sort(key=score)
                res = list(
                    interleave(
                        groupby(lambda r: r["sliced_ind_set"], res).values()
                    )
                )

                # update the order of the new forest
                forest = [r["tree"] for r in res]

                if progbar:
                    pbar.update()
                    if pool is None:
                        d = _describe_tree(forest[0])
                    else:
                        d = submit(pool, _describe_tree, forest[0]).result()
                    pbar.set_description(d, refresh=False)

                if res[0]["size"] <= target_size:
                    break

        finally:
            if progbar:
                pbar.close()

        if is_scatter_pool:
            tree.set_state_from(forest[0].result())
            maybe_rejoin_pool(is_worker, pool)
        else:
            tree.set_state_from(forest[0])

        return tree

    slice_and_reconfigure_forest_ = functools.partialmethod(
        slice_and_reconfigure_forest, inplace=True
    )
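The forest search above repeatedly ranks candidate trees, keeps the best ``restart_fraction`` of them, and cycles those survivors to seed a full-size forest again. A minimal standalone sketch of that prune-and-regrow step (``forest_step`` and ``mutate`` are hypothetical names, not part of cotengra):

```python
import itertools


def forest_step(scored, num_trees, restart_fraction, mutate):
    # rank candidates by score, lower is better (cf. ``res.sort(key=score)``)
    scored.sort(key=lambda pair: pair[0])
    # keep the best fraction, always at least one
    num_keep = max(1, int(num_trees * restart_fraction))
    # cycle the survivors to regenerate a full-size forest
    seeds = itertools.cycle(tree for _, tree in scored[:num_keep])
    return [mutate(next(seeds)) for _ in range(num_trees)]
```

For example, ``forest_step([(3, "c"), (1, "a"), (2, "b")], 4, 0.5, str.upper)`` keeps ``"a"`` and ``"b"`` and regrows four candidates from them.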

    def compressed_reconfigure(
        self,
        minimize=None,
        order_only=False,
        max_nodes="auto",
        max_time=None,
        local_score=None,
        exploration_power=0,
        best_score=None,
        progbar=False,
        inplace=False,
    ):
        """Reconfigure this tree according to ``peak_size_compressed``.

        Parameters
        ----------
        minimize : str or Objective, optional
            The compressed objective to minimize, which encodes the maximum
            bond dimension (``chi``) to consider. If not given, this tree's
            default objective is used.
        order_only : bool, optional
            Whether to only consider the ordering of the current tree
            contractions, or all possible contractions, starting with the
            current.
        max_nodes : int, optional
            Set the maximum number of contraction steps to consider.
        max_time : float, optional
            Set the maximum time to spend on the search.
        local_score : callable, optional
            A function that assigns a score to a potential contraction, with a
            lower score giving more priority to explore that contraction
            earlier. It should have signature::

                local_score(step, new_score, dsize, new_size)

            where ``step`` is the number of steps so far, ``new_score`` is the
            score of the contraction so far, ``dsize`` is the change in memory
            by the current step, and ``new_size`` is the new memory size after
            contraction.
        exploration_power : float, optional
            If not ``0.0``, the inverse power to which the step is raised in
            the default local score function. Higher values favor exploring
            more promising branches early on - at the cost of increased memory.
            Ignored if ``local_score`` is supplied.
        best_score : float, optional
            Manually specify an upper bound for best score found so far.
        progbar : bool, optional
            If ``True``, display a progress bar.
        inplace : bool, optional
            Whether to perform the reconfiguration inplace on this tree.

        Returns
        -------
        ContractionTree
        """
        from .experimental.path_compressed_branchbound import (
            CompressedExhaustive,
        )

        if minimize is None:
            minimize = self.get_default_objective()

        if max_nodes == "auto":
            if max_time is None:
                max_nodes = max(10_000, self.N**2)
            else:
                max_nodes = float("inf")

        opt = CompressedExhaustive(
            minimize=minimize,
            local_score=local_score,
            max_nodes=max_nodes,
            max_time=max_time,
            exploration_power=exploration_power,
            best_score=best_score,
            progbar=progbar,
        )
        opt.setup(self.inputs, self.output, self.size_dict)
        opt.explore_path(self.get_path_surface(), restrict=order_only)

        opt.run(self.inputs, self.output, self.size_dict)
        ssa_path = opt.ssa_path
        rtree = self.__class__.from_path(
            self.inputs,
            self.output,
            self.size_dict,
            ssa_path=ssa_path,
            objective=minimize,
        )
        if inplace:
            self.set_state_from(rtree)
            rtree = self

        rtree.reset_contraction_indices()
        return rtree

    compressed_reconfigure_ = functools.partialmethod(
        compressed_reconfigure, inplace=True
    )

    def windowed_reconfigure(
        self,
        minimize=None,
        order_only=False,
        window_size=20,
        max_iterations=100,
        max_window_tries=1000,
        score_temperature=0.0,
        queue_temperature=1.0,
        scorer=None,
        queue_scorer=None,
        seed=None,
        inplace=False,
        progbar=False,
        **kwargs,
    ):
        """Reconfigure this tree by locally optimizing a sliding 'window'
        of contractions, using a
        :class:`~cotengra.pathfinders.path_compressed.WindowedOptimizer`.
        Additional keyword arguments are forwarded to its ``refine`` method.
        """
        from .pathfinders.path_compressed import WindowedOptimizer

        if minimize is None:
            minimize = self.get_default_objective()

        wo = WindowedOptimizer(
            self.inputs,
            self.output,
            self.size_dict,
            minimize=minimize,
            ssa_path=self.get_ssa_path(),
            seed=seed,
        )

        wo.refine(
            window_size=window_size,
            max_iterations=max_iterations,
            order_only=order_only,
            max_window_tries=max_window_tries,
            score_temperature=score_temperature,
            queue_temperature=queue_temperature,
            scorer=scorer,
            queue_scorer=queue_scorer,
            progbar=progbar,
            **kwargs,
        )
        ssa_path = wo.get_ssa_path()

        rtree = self.__class__.from_path(
            self.inputs,
            self.output,
            self.size_dict,
            ssa_path=ssa_path,
            objective=minimize,
        )

        if inplace:
            self.set_state_from(rtree)
            rtree = self

        rtree.reset_contraction_indices()
        return rtree

    windowed_reconfigure_ = functools.partialmethod(
        windowed_reconfigure, inplace=True
    )

    def flat_tree(self, order=None):
        """Create a nested tuple representation of the contraction tree like::

            ((0, (1, 2)), ((3, 4), ((5, (6, 7)), (8, 9))))

        Such that the contraction will progress like::

            ((0, (1, 2)), ((3, 4), ((5, (6, 7)), (8, 9))))
            ((0, 12), (34, ((5, 67), 89)))
            (012, (34, (567, 89)))
            (012, (34, 56789))
            (012, 3456789)
            0123456789

        Where each integer represents a leaf (i.e. single element node).
        """
        tups = dict(zip(self.gen_leaves(), range(self.N)))

        for parent, l, r in self.traverse(order=order):
            tups[parent] = tups[l], tups[r]

        return tups[self.root]
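The nested-tuple form can also be built directly from an SSA-style path, since each contraction simply pairs two previously created nodes. A standalone sketch, independent of cotengra (``flat_tree_from_ssa`` is a hypothetical helper):

```python
def flat_tree_from_ssa(num_leaves, ssa_path):
    # node i < num_leaves is the leaf integer i itself; each contraction
    # appends a new node, whose ssa id is its position in ``nodes``
    nodes = list(range(num_leaves))
    for i, j in ssa_path:
        nodes.append((nodes[i], nodes[j]))
    # the last node created is the root of the tree
    return nodes[-1]
```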

    def get_leaves_ordered(self):
        """Return the list of leaves as ordered by the contraction tree.

        Returns
        -------
        tuple[node_type]
        """
        if not self.is_complete():
            raise ValueError("Can't order the leaves until tree is complete.")

        return tuple(
            node
            for node in itertools.chain.from_iterable(self.traverse())
            if self.is_leaf(node)
        )

    def get_path(self, order=None):
        """Generate a standard path (with linear recycled ids) from the
        contraction tree.

        Parameters
        ----------
        order : None, "dfs", or callable, optional
            How to order the contractions within the tree. If a callable is
            given (which should take a node as its argument), try to contract
            nodes that minimize this function first.

        Returns
        -------
        path: tuple[tuple[int, int]]
        """
        from bisect import bisect_left

        ssa = self.N
        ssas = list(range(ssa))
        node_to_ssa = dict(zip(self.gen_leaves(), ssas))
        path = []

        for parent, left, right in self.traverse(order=order):
            # map nodes to ssas
            lssa = node_to_ssa[left]
            rssa = node_to_ssa[right]
            # map ssas to linear indices, using bisection
            i, j = sorted((bisect_left(ssas, lssa), bisect_left(ssas, rssa)))
            # 'contract' nodes
            ssas.pop(j)
            ssas.pop(i)
            path.append((i, j))
            ssas.append(ssa)
            # update mapping
            node_to_ssa[parent] = ssa
            ssa += 1

        return tuple(path)

    path = deprecated(get_path, "path", "get_path")

    def get_numpy_path(self, order=None):
        """Generate a path compatible with the `optimize` kwarg of
        `numpy.einsum`.
        """
        return ["einsum_path", *self.get_path(order=order)]
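Such a list can be passed straight to the ``optimize`` kwarg of ``numpy.einsum``. A small usage sketch (the shapes here are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(42)
a, b, c = rng.random((2, 3)), rng.random((3, 4)), rng.random((4, 5))

# the kind of list get_numpy_path produces: contract (a, b) first, then
# contract the result with c
path = ["einsum_path", (0, 1), (0, 1)]
out = np.einsum("ij,jk,kl->il", a, b, c, optimize=path)
```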

    def get_ssa_path(self, order=None):
        """Generate a single static assignment path from the contraction tree.

        Parameters
        ----------
        order : None, "dfs", or callable, optional
            How to order the contractions within the tree. If a callable is
            given (which should take a node as its argument), try to contract
            nodes that minimize this function first.

        Returns
        -------
        ssa_path: tuple[tuple[int, int]]
        """
        ssa_path = []
        pos = dict(zip(self.gen_leaves(), range(self.N)))

        for parent, l, r in self.traverse(order=order):
            i, j = sorted((pos[l], pos[r]))
            ssa_path.append((i, j))
            pos[parent] = len(ssa_path) + self.N - 1

        return tuple(ssa_path)

    ssa_path = deprecated(get_ssa_path, "ssa_path", "get_ssa_path")
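The difference between the two conventions is just id bookkeeping: a linear path indexes into the current list of 'live' tensors, while an SSA path never recycles ids. A standalone sketch of the bisection-based conversion that ``get_path`` performs (``ssa_to_linear`` is a hypothetical name):

```python
from bisect import bisect_left


def ssa_to_linear(num_leaves, ssa_path):
    ssa = num_leaves
    # the sorted list of currently live ssa ids
    live = list(range(num_leaves))
    path = []
    for lssa, rssa in ssa_path:
        # bisection maps each ssa id to its position in the live list
        i, j = sorted((bisect_left(live, lssa), bisect_left(live, rssa)))
        # remove the two contracted ids, higher position first
        live.pop(j)
        live.pop(i)
        path.append((i, j))
        # the new intermediate always gets the next, largest, ssa id,
        # so appending keeps ``live`` sorted
        live.append(ssa)
        ssa += 1
    return tuple(path)
```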

    def surface_order(self, node):
        """Default key for ordering contractions in a 'surface' traversal:
        process nodes by (extent, centrality).
        """
        return (self.get_extent(node), self.get_centrality(node))

    def set_surface_order_from_path(self, ssa_path):
        """Set this tree's ``surface_order`` so that it reproduces the
        contraction ordering given by ``ssa_path``, with any unmatched
        contractions ordered last.
        """
        # first get dict from contractions to parents (don't usually store)
        parent_map = {}
        for p, l, r in self.traverse():
            parent_map[frozenset([l, r])] = p

        # then traverse up in given ssa_path order,
        # assigning parent node ordering 'score' incrementally
        parent_scores = {}
        node_map = {i: n for i, n in enumerate(self.gen_leaves())}
        for j, p in enumerate(ssa_path):
            lr = frozenset(node_map[i] for i in p)
            p = parent_map[lr]
            parent_scores[p] = j
            node_map[self.N + j] = p

        self.surface_order = functools.partial(
            get_with_default, obj=parent_scores, default=float("inf")
        )

    def get_path_surface(self):
        return self.get_path(order=self.surface_order)

    path_surface = deprecated(
        get_path_surface, "path_surface", "get_path_surface"
    )

    def get_ssa_path_surface(self):
        return self.get_ssa_path(order=self.surface_order)

    ssa_path_surface = deprecated(
        get_ssa_path_surface, "ssa_path_surface", "get_ssa_path_surface"
    )

    def get_spans(self):
        """Get all (which could mean none) potential embeddings of this
        contraction tree into a spanning tree of the original graph.

        Returns
        -------
        tuple[dict[frozenset[int], frozenset[int]]]
        """
        ind_to_term = collections.defaultdict(set)
        for i, term in enumerate(self.inputs):
            for ix in term:
                ind_to_term[ix].add(i)

        def boundary_pairs(node):
            """Get nodes along the boundary of the bipartition represented by
            ``node``.
            """
            pairs = set()
            involved = self.get_involved(node)
            legs = self.get_legs(node)
            removed = [ix for ix in involved if ix not in legs]
            for ix in removed:
                # for every index across the contraction
                l1, l2 = ind_to_term[ix]

                # can either span from left to right or right to left
                pairs.add((l1, l2))
                pairs.add((l2, l1))

            return pairs

        # first span choice is any nodes across the top level bipart
        candidates = [
            {
                # which intermediate nodes map to which leaf nodes
                "map": {self.root: self.input_to_node(l2)},
                # the leaf nodes in the spanning tree
                "spine": {l1, l2},
            }
            for l1, l2 in boundary_pairs(self.root)
        ]

        for _, l, r in self.descend():
            for child in (r, l):
                # for each current candidate check all the possible extensions
                for _ in range(len(candidates)):
                    cand = candidates.pop(0)

                    # leaf nodes simply map to themselves
                    if self.is_leaf(child):
                        candidates.append(
                            {
                                "map": {child: child, **cand["map"]},
                                "spine": cand["spine"].copy(),
                            }
                        )

                    for l1, l2 in boundary_pairs(child):
                        if (l1 in cand["spine"]) or (l2 not in cand["spine"]):
                            # pair does not merge inwards into spine
                            continue

                        # valid extension of spanning tree
                        candidates.append(
                            {
                                "map": {
                                    child: self.input_to_node(l2),
                                    **cand["map"],
                                },
                                "spine": cand["spine"] | {l1, l2},
                            }
                        )

        return tuple(c["map"] for c in candidates)

    def compute_centralities(self, combine="mean"):
        """Compute a centrality for every node in this contraction tree."""
        hg = self.get_hypergraph(accel="auto")
        cents = hg.simple_centrality()

        for i, leaf in enumerate(self.gen_leaves()):
            self.info[leaf]["centrality"] = cents[i]

        combine = {
            "mean": lambda x, y: (x + y) / 2,
            "sum": lambda x, y: x + y,
            "max": max,
            "min": min,
        }.get(combine, combine)

        for p, l, r in self.traverse("dfs"):
            self.info[p]["centrality"] = combine(
                self.info[l]["centrality"], self.info[r]["centrality"]
            )

    def get_hypergraph(self, accel=False):
        """Get a hypergraph representing the uncontracted network (i.e. the
        leaves).
        """
        return get_hypergraph(self.inputs, self.output, self.size_dict, accel)

    def reset_contraction_indices(self):
        """Reset all information regarding a) the explicit contraction indices
        ordering and b) cached contraction expressions. This should probably be
        called any time structural changes are made to the tree, e.g.
        reconfiguration.
        """
        # delete all derived information
        # (note legs, involved, etc. are order invariant so we can keep those)
        for node in self.children:
            for k in (
                "inds",
                "einsum_eq",
                "can_dot",
                "tensordot_axes",
                "tensordot_perm",
            ):
                self.info[node].pop(k, None)

        # invalidate any compiled contractions
        self.contraction_cores.clear()

    def sort_contraction_indices(
        self,
        priority="flops",
        make_output_contig=True,
        make_contracted_contig=True,
        reset=True,
    ):
        """Set explicit orders for the contraction indices of this self to
        optimize for one of two things: contiguity in contracted ('k') indices,
        or contiguity of left and right output ('m' and 'n') indices.

        Parameters
        ----------
        priority : {'flops', 'size', 'root', 'leaves'}, optional
            Which order to process the intermediate nodes in. Later nodes
            re-sort previous nodes, so they are more likely to keep their
            ordering. E.g. for 'flops' the most costly contraction will be
            processed last and is thus guaranteed to have its indices
            exactly sorted.
        make_output_contig : bool, optional
            When processing a pairwise contraction, sort the parent contraction
            indices so that the order of indices is the order they appear
            from left to right in the two child (input) tensors.
        make_contracted_contig : bool, optional
            When processing a pairwise contraction, sort the child (input)
            tensor indices so that all contracted indices appear contiguously.
        reset : bool, optional
            Reset all indices to the default order before sorting.
        """
        if reset:
            self.reset_contraction_indices()

        if priority == "flops":
            nodes = sorted(
                self.children.items(), key=lambda x: self.get_flops(x[0])
            )
        elif priority == "size":
            nodes = sorted(
                self.children.items(), key=lambda x: self.get_size(x[0])
            )
        elif priority == "root":
            nodes = ((p, (l, r)) for p, l, r in self.traverse())
        elif priority == "leaves":
            nodes = ((p, (l, r)) for p, l, r in self.descend())
        else:
            raise ValueError(priority)

        for p, (l, r) in nodes:
            p_inds, l_inds, r_inds = map(self.get_inds, (p, l, r))

            if make_output_contig and not self.is_root(p):
                # sort indices by whether they appear in the left or right
                # whether this happens before or after the sort below depends
                # on the order we are processing the nodes
                # (avoid root as don't want to modify output)

                def psort(ix):
                    # group by whether in left or right input
                    return (r_inds.find(ix), l_inds.find(ix))

                p_inds = "".join(sorted(p_inds, key=psort))
                self.info[p]["inds"] = p_inds

            if make_contracted_contig:
                # sort indices by:
                # 1. if they are going to be contracted
                # 2. what order they appear in the parent indices
                # (but ignore leaf indices)
                if not self.is_leaf(l):

                    def lsort(ix):
                        return (r_inds.find(ix), p_inds.find(ix))

                    l_inds = "".join(sorted(self.get_legs(l), key=lsort))
                    self.info[l]["inds"] = l_inds

                if not self.is_leaf(r):

                    def rsort(ix):
                        return (p_inds.find(ix), l_inds.find(ix))

                    r_inds = "".join(sorted(self.get_legs(r), key=rsort))
                    self.info[r]["inds"] = r_inds

        if not reset:
            # still need to invalidate any cached contraction expressions
            self.contraction_cores.clear()
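The ``str.find`` based keys above work because ``find`` returns ``-1`` for absent characters, so sorting one index string by positions in another groups shared (contracted) indices contiguously and in a matching order. A small standalone illustration (the index strings here are made up):

```python
# hypothetical index strings for a single pairwise contraction:
# left input, right input, and parent (output)
l_inds, r_inds, p_inds = "abcx", "cbdy", "axdy"


def lsort(ix):
    # indices absent from r_inds get -1 so come first; contracted
    # indices are grouped at the end, ordered as they appear in r_inds
    return (r_inds.find(ix), p_inds.find(ix))


# kept indices 'a', 'x' first, contracted 'c', 'b' grouped contiguously
sorted_l = "".join(sorted(l_inds, key=lsort))
```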

    def print_contractions(self, sort=None, show_brackets=True):
        """Print each pairwise contraction, with colorized indices (if
        `colorama` is installed), and other information. The color codes are:

        - blue: index appears on left and is kept
        - green: index appears on right and is kept
        - red: contracted index: appears on both sides and is removed
        - pink: batch index: appears on both sides and is kept

        Any trivial indices that appear on only one term and not in the
        output are removed by the preprocessing steps, which are shown first.

        Parameters
        ----------
        sort : {'flops', 'size'}, optional
            Sort the contractions by either the number of floating point
            operations or the size of the intermediate tensor. By default the
            contractions are shown in the order they are performed.
        show_brackets : bool, optional
            Whether to show the brackets around contiguous sections of the same
            type of indices.
        """
        try:
            from colorama import Fore

            RESET = Fore.RESET
            GREY = Fore.WHITE
            PINK = Fore.MAGENTA
            RED = Fore.RED
            BLUE = Fore.BLUE
            GREEN = Fore.GREEN
        except ImportError:
            RESET = GREY = PINK = RED = BLUE = GREEN = ""

        entries = []

        if self.has_preprocessing():
            for pi, eq in self.preprocessing.items():
                # eq is with canonical indices, reinsert original inputs
                replacer = dict(zip(eq.split("->")[0], self.inputs[pi]))
                eq = "".join(replacer.get(c, c) for c in eq)
                print(f"{GREY}preprocess input {pi}: {RESET}{eq}")
            print()

        for i, (p, l, r) in enumerate(self.traverse()):
            p_legs, l_legs, r_legs = map(self.get_legs, [p, l, r])
            p_inds, l_inds, r_inds = map(self.get_inds, [p, l, r])

            # print sizes and flops
            p_flops = self.get_flops(p)
            p_sz, l_sz, r_sz = (
                math.log2(self.get_size(node)) for node in [p, l, r]
            )
            # print whether tensordottable
            if self.get_can_dot(p):
                type_msg = "tensordot"
                perm = self.get_tensordot_perm(p)
                if perm is not None:
                    # and whether indices match tensordot
                    type_msg += "+perm"
            else:
                type_msg = "einsum"

            kpt_brck_l = "(" if show_brackets else ""
            kpt_brck_r = ")" if show_brackets else ""
            con_brck_l = "[" if show_brackets else ""
            con_brck_r = "]" if show_brackets else ""
            hyp_brck_l = "{" if show_brackets else ""
            hyp_brck_r = "}" if show_brackets else ""

            pa = (
                "".join(
                    PINK + f"{hyp_brck_l}{ix}{hyp_brck_r}"
                    if (ix in l_legs) and (ix in r_legs)
                    else GREEN + f"{kpt_brck_l}{ix}{kpt_brck_r}"
                    if ix in r_legs
                    else BLUE + ix
                    for ix in p_inds
                )
                .replace(f"){GREEN}(", "")
                .replace(f"}}{PINK}{{", "")
            )
            la = (
                "".join(
                    PINK + f"{hyp_brck_l}{ix}{hyp_brck_r}"
                    if (ix in p_legs) and (ix in r_legs)
                    else RED + f"{con_brck_l}{ix}{con_brck_r}"
                    if ix in r_legs
                    else BLUE + ix
                    for ix in l_inds
                )
                .replace(f"]{RED}[", "")
                .replace(f"}}{PINK}{{", "")
            )
            ra = (
                "".join(
                    PINK + f"{hyp_brck_l}{ix}{hyp_brck_r}"
                    if (ix in p_legs) and (ix in l_legs)
                    else RED + f"{con_brck_l}{ix}{con_brck_r}"
                    if ix in l_legs
                    else GREEN + ix
                    for ix in r_inds
                )
                .replace(f"]{RED}[", "")
                .replace(f"}}{PINK}{{", "")
            )

            entries.append(
                (
                    p,
                    f"{GREY}({i}) cost: {RESET}{p_flops:.1e} "
                    f"{GREY}widths: {RESET}{l_sz:.1f},{r_sz:.1f}->{p_sz:.1f} "
                    f"{GREY}type: {RESET}{type_msg}\n"
                    f"{GREY}inputs: {la},{ra}{RESET}->\n"
                    f"{GREY}output: {pa}\n",
                )
            )

        if sort == "flops":
            entries.sort(key=lambda x: self.get_flops(x[0]), reverse=True)
        if sort == "size":
            entries.sort(key=lambda x: self.get_size(x[0]), reverse=True)

        entries.append((None, f"{RESET}"))

        o = "\n".join(entry for _, entry in entries)
        print(o)

    # --------------------- Performing the Contraction ---------------------- #

    def get_contractor(
        self,
        order=None,
        prefer_einsum=False,
        strip_exponent=False,
        check_zero=False,
        implementation=None,
        autojit=False,
        progbar=False,
    ):
        """Get a reusable function which performs the contraction corresponding
        to this tree, cached.

        Parameters
        ----------
        tree : ContractionTree
            The contraction tree.
        order : str or callable, optional
            Supplied to :meth:`ContractionTree.traverse`, the order in which
            to perform the pairwise contractions given by the tree.
        prefer_einsum : bool, optional
            Prefer to use ``einsum`` for pairwise contractions, even if
            ``tensordot`` can perform the contraction.
        strip_exponent : bool, optional
            If ``True``, the function will eagerly strip the exponent (in
            log10) from intermediate tensors to control numerical problems from
            leaving the range of the datatype. This method then returns the
            scaled 'mantissa' output array and the exponent separately.
        check_zero : bool, optional
            If ``True``, when ``strip_exponent=True``, explicitly check for
            zero-valued intermediates that would otherwise produce ``nan``,
            instead terminating early if encountered and returning
            ``(0.0, 0.0)``.
        implementation : str or tuple[callable, callable], optional
            What library to use to actually perform the contractions. Options
            are:

            - None: let cotengra choose.
            - "autoray": dispatch with autoray, using the ``tensordot`` and
              ``einsum`` implementation of the backend.
            - "cotengra": use the ``tensordot`` and ``einsum`` implementation
              of cotengra, which is based on batch matrix multiplication. This
              is faster for some backends like numpy, and also enables
              libraries which don't yet provide ``tensordot`` and ``einsum`` to
              be used.
            - "cuquantum": use the cuquantum library to perform the whole
              contraction (not just individual contractions).
            - tuple[callable, callable]: manually supply the ``tensordot`` and
              ``einsum`` implementations to use.

        autojit : bool, optional
            If ``True``, use :func:`autoray.autojit` to compile the contraction
            function.
        progbar : bool, optional
            Whether to show progress through the contraction by default.

        Returns
        -------
        fn : callable
            The contraction function, with signature ``fn(*arrays)``.
        """
        key = (
            autojit,
            order,
            prefer_einsum,
            strip_exponent,
            check_zero,
            implementation,
            progbar,
        )
        try:
            fn = self.contraction_cores[key]
        except KeyError:
            fn = self.contraction_cores[key] = make_contractor(
                tree=self,
                order=order,
                prefer_einsum=prefer_einsum,
                strip_exponent=strip_exponent,
                check_zero=check_zero,
                implementation=implementation,
                autojit=autojit,
                progbar=progbar,
            )

        return fn
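The caching above keys each compiled contractor on the full tuple of options, so repeated calls with identical options reuse the same function. A minimal standalone sketch of this pattern (all names here are illustrative, not cotengra's API):

```python
# Sketch of option-keyed contractor caching: the tuple of contraction
# options forms the dict key, so each distinct combination of options
# 'compiles' its contractor exactly once.


class TreeLike:
    def __init__(self):
        self.contraction_cores = {}
        self.n_builds = 0  # count how many times we actually build

    def _make_contractor(self, **opts):
        self.n_builds += 1
        return lambda *arrays: ("contracted", opts, arrays)

    def get_contractor(self, order=None, prefer_einsum=False, autojit=False):
        key = (order, prefer_einsum, autojit)
        try:
            fn = self.contraction_cores[key]
        except KeyError:
            fn = self.contraction_cores[key] = self._make_contractor(
                order=order, prefer_einsum=prefer_einsum, autojit=autojit
            )
        return fn


tree = TreeLike()
f1 = tree.get_contractor()
f2 = tree.get_contractor()              # same options -> cached function
f3 = tree.get_contractor(autojit=True)  # new options -> new contractor
```

Because the key includes every option, changing any single option (e.g. `autojit`) triggers a fresh build rather than silently reusing a mismatched function.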

    def contract_core(
        self,
        arrays,
        order=None,
        prefer_einsum=False,
        strip_exponent=False,
        check_zero=False,
        backend=None,
        implementation=None,
        autojit="auto",
        progbar=False,
    ):
        """Contract ``arrays`` with this tree. The order of the axes and
        output is assumed to be that of ``tree.inputs`` and ``tree.output``,
        but with sliced indices removed. This functon contracts the core tree
        and thus if indices have been sliced the arrays supplied need to be
        sliced as well.

        Parameters
        ----------
        arrays : sequence of array
            The arrays to contract.
        order : str or callable, optional
            Supplied to :meth:`ContractionTree.traverse`.
        prefer_einsum : bool, optional
            Prefer to use ``einsum`` for pairwise contractions, even if
            ``tensordot`` can perform the contraction.
        strip_exponent : bool, optional
            If ``True``, eagerly strip the exponent (in log10) from
            intermediate tensors to control numerical problems from leaving
            the range of the datatype. This method then returns the scaled
            'mantissa' output array and the exponent separately.
        check_zero : bool, optional
            If ``True``, when ``strip_exponent=True``, explicitly check for
            zero-valued intermediates that would otherwise produce ``nan``,
            instead terminating early if encountered and returning
            ``(0.0, 0.0)``.
        backend : str, optional
            What library to use for ``einsum`` and ``transpose``, which will
            be automatically inferred from the arrays if not given.
        implementation : str or tuple[callable, callable], optional
            What library to use to actually perform the contractions, see
            :meth:`ContractionTree.get_contractor`.
        autojit : "auto" or bool, optional
            Whether to use ``autoray.autojit`` to jit compile the expression.
            If "auto", then let ``cotengra`` choose.
        progbar : bool, optional
            Show progress through the contraction.
        """
        if autojit == "auto":
            # choose for the user
            autojit = backend == "jax"

        fn = self.get_contractor(
            order=order,
            prefer_einsum=prefer_einsum,
            strip_exponent=strip_exponent is not False,
            implementation=implementation,
            autojit=autojit,
            check_zero=check_zero,
            progbar=progbar,
        )
        return fn(*arrays, backend=backend)

    def slice_key(self, i, strides=None):
        """Get the combination of sliced index values for overall slice ``i``.

        Parameters
        ----------
        i : int
            The overall slice index.

        Returns
        -------
        key : dict[str, int]
            The value each sliced index takes for slice ``i``.
        """
        if strides is None:
            strides = get_slice_strides(self.sliced_inds)

        key = {}
        for (ind, info), stride in zip(self.sliced_inds.items(), strides):
            if info.project is None:
                key[ind] = i // stride
                i %= stride
            else:
                # size is 1 and i doesn't change
                key[ind] = info.project

        return key
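The stride decomposition performed by `slice_key` can be illustrated standalone: the sliced index sizes define mixed-radix strides, and the overall slice number `i` is peeled apart digit by digit. The sizes and index names below are made up for illustration:

```python
# Standalone sketch of the stride decomposition in ``slice_key``: each
# sliced index contributes one 'digit' of the overall slice number i,
# with the last index varying fastest.


def slice_key(i, sizes):
    # sizes: dict mapping index name -> number of slices for that index
    strides = {}
    stride = 1
    for ind in reversed(sizes):
        strides[ind] = stride
        stride *= sizes[ind]
    key = {}
    for ind in sizes:
        key[ind] = i // strides[ind]  # extract this index's digit
        i %= strides[ind]             # remove it from the running total
    return key


# with sizes a=2, b=3 there are 6 overall slices, enumerated row-major
keys = [slice_key(i, {"a": 2, "b": 3}) for i in range(6)]
```

Here `keys` runs from `{'a': 0, 'b': 0}` up to `{'a': 1, 'b': 2}`, matching a row-major enumeration with `'a'` varying slowest.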

    def slice_arrays(self, arrays, i):
        """Take ``arrays`` and slice the relevant inputs according to
        ``tree.sliced_inds`` and the dynary representation of ``i``.
        """
        temp_arrays = list(arrays)

        # e.g. {'a': 2, 'd': 7, 'z': 0}
        locations = self.slice_key(i)

        for c in self.sliced_inputs:
            # the indexing object, e.g. [:, :, 7, :, 2, :, :, 0]
            selector = tuple(
                locations.get(ix, slice(None)) for ix in self.inputs[c]
            )
            # re-insert the sliced array
            temp_arrays[c] = temp_arrays[c][selector]

        return temp_arrays
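The indexing object built inside `slice_arrays` can be sketched in isolation: for each input term, fixed integer locations replace the sliced indices while `slice(None)` (i.e. `:`) keeps the rest. The helper name and index values below are illustrative:

```python
# Sketch of selector construction as in ``slice_arrays``: sliced indices
# are pinned to their integer location, all other axes are kept whole.


def build_selector(term, locations):
    # term: the index string of one input tensor, e.g. "abcd"
    # locations: mapping of sliced index -> value for this slice
    return tuple(locations.get(ix, slice(None)) for ix in term)


# an input tensor with indices 'abcd', where 'b' and 'd' are sliced
sel = build_selector("abcd", {"b": 7, "d": 2})
# using this selector is equivalent to indexing ``arr[:, 7, :, 2]``
```

Applying `arr[sel]` then extracts the slice of the input array corresponding to one term of the overall sliced sum.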

    def contract_slice(self, arrays, i, **kwargs):
        """Get slices ``i`` of ``arrays`` and then contract them."""
        return self.contract_core(self.slice_arrays(arrays, i), **kwargs)

    def gather_slices(self, slices, backend=None, progbar=False):
        """Gather all the output contracted slices into a single full result.
        If none of the sliced indices appear in the output, then this is a
        simple sum - otherwise the slices need to be partially summed and
        partially stacked.
        """
        if progbar:
            import tqdm

            slices = tqdm.tqdm(slices, total=self.multiplicity)

        output_pos = {
            ix: i for i, ix in enumerate(self.output) if ix in self.sliced_inds
        }

        add_maybe_exponent_stripped = AdderWithMaybeExponentStripped()

        if not output_pos:
            # we can just sum everything
            return functools.reduce(add_maybe_exponent_stripped, slices)

        # first we sum over non-output sliced indices
        chunks = {}
        for i, s in enumerate(slices):
            key_slice = self.slice_key(i)
            key = tuple(key_slice[ix] for ix in output_pos)
            try:
                chunks[key] = add_maybe_exponent_stripped(chunks[key], s)
            except KeyError:
                chunks[key] = s

        if isinstance(next(iter(chunks.values())), tuple):
            # have stripped exponents, need to scale to largest
            emax = max(v[1] for v in chunks.values())
            chunks = {
                k: mi * 10 ** (ei - emax) for k, (mi, ei) in chunks.items()
            }
        else:
            emax = None

        # then we stack these summed chunks over output sliced indices
        def recursively_stack_chunks(loc, remaining):
            if not remaining:
                return chunks[loc]
            arrays = [
                recursively_stack_chunks(loc + (d,), remaining[1:])
                for d in self.sliced_inds[remaining[0]].sliced_range
            ]
            axes = output_pos[remaining[0]] - len(loc)
            return do("stack", arrays, axes, like=backend)

        result = recursively_stack_chunks((), tuple(output_pos))

        if emax is not None:
            # strip_exponent was True, return the exponent separately
            return result, emax

        return result
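The two phases of `gather_slices` can be sketched with plain scalars standing in for the slice arrays: slices are first summed over sliced indices that do *not* appear in the output, then grouped over those that do. The index names and sizes below are made up, with `'a'` an output index of size 2 and `'b'` an inner summed index of size 3:

```python
# Pure-python sketch of gathering: sum over inner sliced indices, then
# group ('stack') the partial sums along the output sliced indices.


def slice_key(i):
    # 'a' varies slowest because output sliced indices are ordered first
    return {"a": i // 3, "b": i % 3}


slices = [1, 2, 3, 10, 20, 30]  # one scalar result per overall slice

chunks = {}
for i, s in enumerate(slices):
    key = (slice_key(i)["a"],)            # keep only output sliced indices
    chunks[key] = chunks.get(key, 0) + s  # sum over the inner index 'b'

# 'stack' the partially summed chunks along the output axis
result = [chunks[(d,)] for d in range(2)]
```

With real arrays the final line is a backend `stack` call along the output axis rather than a list comprehension, but the grouping logic is the same.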

    def gen_output_chunks(
        self, arrays, with_key=False, progbar=False, **contract_opts
    ):
        """Generate each output chunk of the contraction - i.e. take care of
        summing internally sliced indices only first. This assumes that the
        ``sliced_inds`` are sorted by whether they appear in the output or not
        (the default order). Useful for performing some kind of reduction over
        the final tensor object like  ``fn(x).sum()`` without constructing the
        entire thing.

        Parameters
        ----------
        arrays : sequence of array
            The arrays to contract.
        with_key : bool, optional
            Whether to yield the output index configuration key along with the
            chunk.
        progbar : bool, optional
            Show progress through the contraction chunks.

        Yields
        ------
        chunk : array
            A chunk of the contracted result.
        key : dict[str, int]
            The value each sliced output index takes for this chunk.
        """
        # consecutive slices of size ``stepsize`` all belong to the same output
        # block because the sliced indices are sorted output first
        stepsize = prod(
            si.size for si in self.sliced_inds.values() if si.inner
        )

        if progbar:
            import tqdm

            it = tqdm.trange(self.nslices // stepsize)
        else:
            it = range(self.nslices // stepsize)

        for o in it:
            chunk = self.contract_slice(arrays, o * stepsize, **contract_opts)

            if with_key:
                output_key = {
                    ix: x
                    for ix, x in self.slice_key(o * stepsize).items()
                    if ix in self.output
                }

            for j in range(1, stepsize):
                i = o * stepsize + j
                chunk = chunk + self.contract_slice(arrays, i, **contract_opts)

            if with_key:
                yield chunk, output_key
            else:
                yield chunk
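The chunking arithmetic above relies on the output-first ordering of sliced indices: every run of `stepsize` consecutive slice numbers shares one output block. A scalar stand-in sketch (sizes here are hypothetical):

```python
# Sketch of ``gen_output_chunks`` arithmetic: with sliced indices sorted
# output-first, slice numbers o*stepsize .. o*stepsize+stepsize-1 all
# belong to output block o, so each chunk sums one such run.

inner_sizes = [3]  # sizes of sliced indices NOT in the output
outer_sizes = [2]  # sizes of sliced indices in the output

stepsize = 1
for s in inner_sizes:
    stepsize *= s
nslices = stepsize
for s in outer_sizes:
    nslices *= s


def contract_slice(i):
    # stand-in: the 'result' of contracting slice i is just its index
    return i


chunks = []
for o in range(nslices // stepsize):
    chunk = sum(contract_slice(o * stepsize + j) for j in range(stepsize))
    chunks.append(chunk)
```

With 3 inner and 2 outer slice values this yields two chunks, summing slices 0-2 and 3-5 respectively.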

    def contract(
        self,
        arrays,
        order=None,
        prefer_einsum=False,
        strip_exponent=False,
        check_zero=False,
        backend=None,
        implementation=None,
        autojit="auto",
        progbar=False,
    ):
        """Contract ``arrays`` with this tree. This function takes *unsliced*
        arrays and handles the slicing, contractions and gathering. The order
        of the axes and output is assumed to match that of ``tree.inputs`` and
        ``tree.output``.

        Parameters
        ----------
        arrays : sequence of array
            The arrays to contract.
        order : str or callable, optional
            Supplied to :meth:`ContractionTree.traverse`.
        prefer_einsum : bool, optional
            Prefer to use ``einsum`` for pairwise contractions, even if
            ``tensordot`` can perform the contraction.
        strip_exponent : bool, optional
            If ``True``, eagerly strip the exponent (in log10) from
            intermediate tensors to control numerical problems from leaving the
            range of the datatype. This method then returns the scaled
            'mantissa' output array and the exponent separately.
        check_zero : bool, optional
            If ``True``, when ``strip_exponent=True``, explicitly check for
            zero-valued intermediates that would otherwise produce ``nan``,
            instead terminating early if encountered and returning
            ``(0.0, 0.0)``.
        backend : str, optional
            What library to use for ``tensordot``, ``einsum`` and
            ``transpose``, which will be automatically inferred from the
            input arrays if not given.
        implementation : str or tuple[callable, callable], optional
            What library to use to actually perform the contractions, see
            :meth:`ContractionTree.get_contractor`.
        autojit : bool, optional
            Whether to use the 'autojit' feature of `autoray` to compile the
            contraction expression.
        progbar : bool, optional
            Whether to show a progress bar.

        Returns
        -------
        output : array
            The contracted output; it will be scaled if
            ``strip_exponent==True``.
        exponent : float
            The exponent of the output in base 10, returned only if
            ``strip_exponent==True``.

        See Also
        --------
        contract_core, contract_slice, slice_arrays, gather_slices
        """
        if not self.sliced_inds:
            return self.contract_core(
                arrays,
                order=order,
                prefer_einsum=prefer_einsum,
                strip_exponent=strip_exponent,
                check_zero=check_zero,
                backend=backend,
                implementation=implementation,
                autojit=autojit,
                progbar=progbar,
            )

        slices = (
            self.contract_slice(
                arrays,
                i,
                order=order,
                prefer_einsum=prefer_einsum,
                strip_exponent=strip_exponent,
                check_zero=check_zero,
                backend=backend,
                implementation=implementation,
                autojit=autojit,
            )
            for i in range(self.multiplicity)
        )

        return self.gather_slices(slices, backend=backend, progbar=progbar)
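The `(mantissa, exponent)` convention used when `strip_exponent=True` can be illustrated standalone: two stripped pairs are added by rescaling both mantissas to the larger exponent, and a pair is recombined by multiplying back the power of ten. The helper names below are illustrative:

```python
# Sketch of stripped-exponent arithmetic: results are carried as
# (mantissa, base-10 exponent) pairs to avoid datatype over/underflow.


def add_stripped(x, y):
    mx, ex = x
    my, ey = y
    emax = max(ex, ey)
    # rescale both mantissas to the shared (larger) exponent, then add
    return (mx * 10 ** (ex - emax) + my * 10 ** (ey - emax), emax)


def recombine(m, e):
    # undo the stripping to recover the full value
    return m * 10**e


total = add_stripped((2.0, 3), (5.0, 2))  # 2000 + 500 as stripped pairs
```

Keeping the exponent separate means intermediate mantissas stay near unit magnitude even when the true contraction value would overflow the datatype.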

    def contract_mpi(self
SYMBOL INDEX (1396 symbols across 60 files)

FILE: cotengra/__init__.py
  function hyper_optimize (line 214) | def hyper_optimize(
  function hyper_compressed_optimize (line 236) | def hyper_compressed_optimize(
  function random_greedy_optimize (line 253) | def random_greedy_optimize(

FILE: cotengra/contract.py
  function set_default_implementation (line 13) | def set_default_implementation(impl):
  function get_default_implementation (line 18) | def get_default_implementation():
  function default_implementation (line 23) | def default_implementation(impl):
  function _sanitize_equation (line 35) | def _sanitize_equation(eq):
  function _parse_einsum_single (line 62) | def _parse_einsum_single(eq, shape):
  function _parse_eq_to_pure_multiplication (line 122) | def _parse_eq_to_pure_multiplication(a_term, shape_a, b_term, shape_b, o...
  function _parse_eq_to_batch_matmul (line 168) | def _parse_eq_to_batch_matmul(eq, shape_a, shape_b):
  function _einsum_single (line 332) | def _einsum_single(eq, x, backend=None):
  function _do_contraction_via_bmm (line 364) | def _do_contraction_via_bmm(
  function einsum (line 414) | def einsum(eq, a, b=None, *, backend=None):
  function gen_nice_inds (line 462) | def gen_nice_inds():
  function _parse_tensordot_axes_to_matmul (line 473) | def _parse_tensordot_axes_to_matmul(axes, shape_a, shape_b):
  function tensordot (line 521) | def tensordot(a, b, axes=2, *, backend=None):
  function extract_contractions (line 573) | def extract_contractions(
  class Contractor (line 654) | class Contractor:
    method __init__ (line 702) | def __init__(
    method __call__ (line 718) | def __call__(self, *arrays, **kwargs):
  class CuQuantumContractor (line 840) | class CuQuantumContractor:
    method __init__ (line 841) | def __init__(
    method setup (line 883) | def setup(self, *arrays):
    method __call__ (line 901) | def __call__(
    method __del__ (line 920) | def __del__(self):
  function make_contractor (line 925) | def make_contractor(

FILE: cotengra/core.py
  function cached_node_property (line 59) | def cached_node_property(name):
  function legs_union (line 76) | def legs_union(legs_seq):
  function legs_without (line 88) | def legs_without(legs, ind):
  function get_with_default (line 95) | def get_with_default(k, obj, default):
  class SliceInfo (line 100) | class SliceInfo:
    method sliced_range (line 107) | def sliced_range(self):
  function get_slice_strides (line 114) | def get_slice_strides(sliced_inds):
  class AdderWithMaybeExponentStripped (line 125) | class AdderWithMaybeExponentStripped:
    method __init__ (line 133) | def __init__(self):
    method __call__ (line 138) | def __call__(self, x, y):
  class ContractionTree (line 175) | class ContractionTree:
    method __init__ (line 215) | def __init__(
    method set_state_from (line 318) | def set_state_from(self, other):
    method copy (line 368) | def copy(self):
    method set_default_objective (line 374) | def set_default_objective(self, objective):
    method get_default_objective (line 378) | def get_default_objective(self):
    method get_default_combo_factor (line 384) | def get_default_combo_factor(self):
    method get_score (line 392) | def get_score(self, objective=None):
    method nslices (line 404) | def nslices(self):
    method nchunks (line 411) | def nchunks(self):
    method input_to_node (line 419) | def input_to_node(self, i):
    method node_to_input (line 434) | def node_to_input(self, node):
    method node_to_terms (line 449) | def node_to_terms(self, node):
    method gen_leaves (line 458) | def gen_leaves(self):
    method get_incomplete_nodes (line 464) | def get_incomplete_nodes(self):
    method autocomplete (line 517) | def autocomplete(self, **contract_opts):
    method from_path (line 537) | def from_path(
    method from_info (line 639) | def from_info(cls, info, **kwargs):
    method from_eq (line 650) | def from_eq(cls, eq, size_dict, **kwargs):
    method get_eq (line 665) | def get_eq(self):
    method get_shapes (line 676) | def get_shapes(self):
    method get_inputs_sliced (line 687) | def get_inputs_sliced(self):
    method get_output_sliced (line 700) | def get_output_sliced(self):
    method get_eq_sliced (line 710) | def get_eq_sliced(self):
    method get_shapes_sliced (line 722) | def get_shapes_sliced(self):
    method from_edge_path (line 738) | def from_edge_path(
    method _add_node (line 765) | def _add_node(self, node, check=False, **kwargs):
    method _remove_node (line 830) | def _remove_node(self, node):
    method compute_leaf_legs (line 861) | def compute_leaf_legs(self, i):
    method has_preprocessing (line 906) | def has_preprocessing(self):
    method has_hyper_indices (line 912) | def has_hyper_indices(self):
    method get_extent (line 920) | def get_extent(self, node):
    method get_subgraph (line 940) | def get_subgraph(self, node) -> tuple[int, ...]:
    method get_legs (line 970) | def get_legs(self, node):
    method get_involved (line 1002) | def get_involved(self, node):
    method get_size (line 1010) | def get_size(self, node):
    method get_flops (line 1015) | def get_flops(self, node):
    method get_can_dot (line 1025) | def get_can_dot(self, node):
    method get_inds (line 1035) | def get_inds(self, node):
    method get_tensordot_axes (line 1054) | def get_tensordot_axes(self, node):
    method get_tensordot_perm (line 1069) | def get_tensordot_perm(self, node):
    method get_einsum_eq (line 1083) | def get_einsum_eq(self, node):
    method is_leaf (line 1097) | def is_leaf(self, node):
    method is_root (line 1111) | def is_root(self, node):
    method is_descendant (line 1125) | def is_descendant(self, node, ancestor):
    method get_peak_size (line 1141) | def get_peak_size(self, node):
    method reorder_for_peak_size (line 1170) | def reorder_for_peak_size(self):
    method get_centrality (line 1189) | def get_centrality(self, node):
    method total_flops (line 1196) | def total_flops(self, dtype=None, log=None):
    method total_write (line 1229) | def total_write(self):
    method combo_cost (line 1240) | def combo_cost(self, factor=DEFAULT_COMBO_FACTOR, combine=sum, log=None):
    method max_size (line 1256) | def max_size(self, log=None):
    method max_contraction_size (line 1274) | def max_contraction_size(self, log=None):
    method peak_size (line 1299) | def peak_size(self, order=None, log=None):
    method contract_stats (line 1318) | def contract_stats(self, force=False):
    method arithmetic_intensity (line 1349) | def arithmetic_intensity(self):
    method contraction_scaling (line 1355) | def contraction_scaling(self):
    method contraction_cost (line 1362) | def contraction_cost(self, log=None):
    method naive_cost (line 1366) | def naive_cost(self, log=None):
    method speedup (line 1383) | def speedup(self, log=None):
    method contraction_width (line 1398) | def contraction_width(self, log=2):
    method compressed_contract_stats (line 1402) | def compressed_contract_stats(
    method total_flops_compressed (line 1448) | def total_flops_compressed(
    method total_write_compressed (line 1479) | def total_write_compressed(
    method combo_cost_compressed (line 1502) | def combo_cost_compressed(
    method max_size_compressed (line 1526) | def max_size_compressed(
    method peak_size_compressed (line 1545) | def peak_size_compressed(
    method contraction_width_compressed (line 1569) | def contraction_width_compressed(
    method _update_tracked (line 1578) | def _update_tracked(self, node):
    method contract_nodes_pair (line 1586) | def contract_nodes_pair(
    method contract_nodes (line 1668) | def contract_nodes(
    method is_complete (line 1755) | def is_complete(self):
    method get_default_order (line 1778) | def get_default_order(self):
    method _traverse_dfs (line 1781) | def _traverse_dfs(self):
    method _traverse_ordered (line 1801) | def _traverse_ordered(self, order):
    method traverse (line 1834) | def traverse(self, order=None):
    method descend (line 1866) | def descend(self, mode="dfs"):
    method get_subtree (line 1898) | def get_subtree(self, node, size, search="bfs", seed=None):
    method remove_ind (line 1966) | def remove_ind(self, ind, project=None, inplace=False):
    method restore_ind (line 2046) | def restore_ind(self, ind, inplace=False):
    method unslice_rand (line 2093) | def unslice_rand(self, seed=None, inplace=False):
    method unslice_all (line 2113) | def unslice_all(self, inplace=False):
    method calc_subtree_candidates (line 2134) | def calc_subtree_candidates(self, pwr=2, what="flops"):
    method _subtree_remove_and_optimize (line 2158) | def _subtree_remove_and_optimize(
    method _subtree_reconfigure_descend (line 2189) | def _subtree_reconfigure_descend(
    method _subtree_reconfigure_rand_select (line 2251) | def _subtree_reconfigure_rand_select(
    method subtree_reconfigure (line 2316) | def subtree_reconfigure(
    method subtree_reconfigure_forest (line 2439) | def subtree_reconfigure_forest(
    method slice (line 2620) | def slice(
    method slice_and_reconfigure (line 2711) | def slice_and_reconfigure(
    method slice_and_reconfigure_forest (line 2798) | def slice_and_reconfigure_forest(
    method compressed_reconfigure (line 2973) | def compressed_reconfigure(
    method windowed_reconfigure (line 3074) | def windowed_reconfigure(
    method flat_tree (line 3137) | def flat_tree(self, order=None):
    method get_leaves_ordered (line 3160) | def get_leaves_ordered(self):
    method get_path (line 3176) | def get_path(self, order=None):
    method get_numpy_path (line 3217) | def get_numpy_path(self, order=None):
    method get_ssa_path (line 3223) | def get_ssa_path(self, order=None):
    method surface_order (line 3249) | def surface_order(self, node):
    method set_surface_order_from_path (line 3252) | def set_surface_order_from_path(self, ssa_path):
    method get_path_surface (line 3273) | def get_path_surface(self):
    method get_ssa_path_surface (line 3280) | def get_ssa_path_surface(self):
    method get_spans (line 3287) | def get_spans(self):
    method compute_centralities (line 3362) | def compute_centralities(self, combine="mean"):
    method get_hypergraph (line 3382) | def get_hypergraph(self, accel=False):
    method reset_contraction_indices (line 3388) | def reset_contraction_indices(self):
    method sort_contraction_indices (line 3409) | def sort_contraction_indices(
    method print_contractions (line 3496) | def print_contractions(self, sort=None, show_brackets=True):
    method get_contractor (line 3626) | def get_contractor(
    method contract_core (line 3712) | def contract_core(
    method slice_key (line 3763) | def slice_key(self, i, strides=None):
    method slice_arrays (line 3790) | def slice_arrays(self, arrays, i):
    method contract_slice (line 3809) | def contract_slice(self, arrays, i, **kwargs):
    method gather_slices (line 3813) | def gather_slices(self, slices, backend=None, progbar=False):
    method gen_output_chunks (line 3872) | def gen_output_chunks(
    method contract (line 3931) | def contract(
    method contract_mpi (line 4020) | def contract_mpi(self, arrays, comm=None, root=None, **kwargs):
    method benchmark (line 4080) | def benchmark(
    method plot_hypergraph (line 4164) | def plot_hypergraph(self, **kwargs):
    method describe (line 4168) | def describe(self, info="normal", join=" "):
    method __repr__ (line 4201) | def __repr__(self):
    method __str__ (line 4213) | def __str__(self):
  function _reconfigure_tree (line 4221) | def _reconfigure_tree(tree, *args, **kwargs):
  function _slice_and_reconfigure_tree (line 4225) | def _slice_and_reconfigure_tree(tree, *args, **kwargs):
  function _get_tree_info (line 4229) | def _get_tree_info(tree):
  function _describe_tree (line 4235) | def _describe_tree(tree, info="normal"):
  class ContractionTreeCompressed (line 4239) | class ContractionTreeCompressed(ContractionTree):
    method set_state_from (line 4244) | def set_state_from(self, other):
    method from_path (line 4249) | def from_path(
    method get_default_order (line 4301) | def get_default_order(self):
    method get_default_objective (line 4304) | def get_default_objective(self):
    method get_default_chi (line 4309) | def get_default_chi(self):
    method get_default_compress_late (line 4321) | def get_default_compress_late(self):
    method get_contractor (line 4344) | def get_contractor(self, *_, **__):
    method simulated_anneal (line 4352) | def simulated_anneal(
  class PartitionTreeBuilder (line 4409) | class PartitionTreeBuilder:
    method __init__ (line 4425) | def __init__(self, partition_fn):
    method build_divide (line 4428) | def build_divide(
    method build_agglom (line 4539) | def build_agglom(
    method trial_fn (line 4607) | def trial_fn(self, inputs, output, size_dict, **partition_opts):
    method trial_fn_agglom (line 4610) | def trial_fn_agglom(self, inputs, output, size_dict, **partition_opts):
  function jitter (line 4614) | def jitter(x, strength, rng):
  function jitter_dict (line 4618) | def jitter_dict(d, strength, seed=None):
  function separate (line 4623) | def separate(xs, blocks):

FILE: cotengra/core_multi.py
  class ContractionTreeMulti (line 6) | class ContractionTreeMulti(ContractionTree):
    method __init__ (line 7) | def __init__(
    method set_state_from (line 23) | def set_state_from(self, other):
    method _remove_node (line 29) | def _remove_node(self, node):
    method _update_tracked (line 34) | def _update_tracked(self, node):
    method get_node_var_inds (line 40) | def get_node_var_inds(self, node):
    method get_node_is_bright (line 59) | def get_node_is_bright(self, node):
    method get_node_mult (line 75) | def get_node_mult(self, node):
    method get_node_cache_mult (line 81) | def get_node_cache_mult(self, node, sliced_ind_ordering):
    method get_flops (line 91) | def get_flops(self, node):
    method get_cache_contrib (line 98) | def get_cache_contrib(self, node):
    method peak_size (line 118) | def peak_size(self, log=None):
    method reorder_contractions_for_peak_est (line 135) | def reorder_contractions_for_peak_est(self):
    method reorder_sliced_inds (line 158) | def reorder_sliced_inds(self):
    method exact_multi_stats (line 167) | def exact_multi_stats(self, configs):

FILE: cotengra/experimental/hyper_de.py
  class HyperDESampler (line 13) | class HyperDESampler:
    method __init__ (line 46) | def __init__(
    method _mutate (line 85) | def _mutate(self, target_idx):
    method _sample_generation (line 114) | def _sample_generation(self):
    method _extend_generation (line 126) | def _extend_generation(self, target_idx=None):
    method ask (line 143) | def ask(self):
    method tell (line 163) | def tell(self, trial_number, score):
  class DEOptLib (line 196) | class DEOptLib(HyperOptLib):
    method setup (line 199) | def setup(
    method get_setting (line 281) | def get_setting(self):
    method report_result (line 292) | def report_result(self, setting, trial, score):

FILE: cotengra/experimental/hyper_pe.py
  class HyperPESampler (line 15) | class HyperPESampler:
    method __init__ (line 50) | def __init__(
    method _make_sigmas (line 92) | def _make_sigmas(self):
    method _sample_candidate (line 104) | def _sample_candidate(self, worker_idx, noise=None):
    method _sample_generation (line 139) | def _sample_generation(self):
    method _extend_generation (line 151) | def _extend_generation(self, worker_idx=None, noise=None):
    method ask (line 167) | def ask(self):
    method tell (line 187) | def tell(self, trial_number, score):
  class PEOptLib (line 250) | class PEOptLib(HyperOptLib):
    method setup (line 253) | def setup(
    method get_setting (line 339) | def get_setting(self):
    method report_result (line 350) | def report_result(self, setting, trial, score):

FILE: cotengra/experimental/hyper_pymoo.py
  function _get_pymoo_algorithm (line 18) | def _get_pymoo_algorithm(name):
  class HyperPymooSampler (line 60) | class HyperPymooSampler:
    method __init__ (line 63) | def __init__(
    method ask (line 106) | def ask(self):
    method tell (line 133) | def tell(self, trial_number, score):
  class PymooOptLib (line 150) | class PymooOptLib(HyperOptLib):
    method setup (line 153) | def setup(
    method get_setting (line 189) | def get_setting(self):
    method report_result (line 199) | def report_result(self, setting, trial, score):

FILE: cotengra/experimental/hyper_scipy.py
  class _StopOptimization (line 29) | class _StopOptimization(Exception):
  class ScipyAskTell (line 33) | class ScipyAskTell:
    method __init__ (line 51) | def __init__(self, method, bounds, **kwargs):
    method _get_optimizer_fn (line 64) | def _get_optimizer_fn(self):
    method _objective (line 69) | def _objective(self, x):
    method _run (line 78) | def _run(self):
    method start (line 93) | def start(self):
    method ask (line 98) | def ask(self):
    method tell (line 111) | def tell(self, score):
    method stop (line 115) | def stop(self):
  class HyperScipySampler (line 127) | class HyperScipySampler:
    method __init__ (line 152) | def __init__(
    method _make_worker (line 178) | def _make_worker(self):
    method _next_worker (line 188) | def _next_worker(self):
    method ask (line 207) | def ask(self):
    method tell (line 230) | def tell(self, trial_number, score):
    method stop (line 238) | def stop(self):
  class ScipyOptLib (line 244) | class ScipyOptLib(HyperOptLib):
    method setup (line 249) | def setup(
    method get_setting (line 312) | def get_setting(self):
    method report_result (line 322) | def report_result(self, setting, trial, score):
    method cleanup (line 328) | def cleanup(self):

FILE: cotengra/experimental/hyper_smac.py
  function build_config_space (line 9) | def build_config_space(method, space):
  function config_to_params (line 56) | def config_to_params(config):
  class SMACOptLib (line 61) | class SMACOptLib(HyperOptLib):
    method setup (line 66) | def setup(
    method get_setting (line 115) | def get_setting(self):
    method report_result (line 127) | def report_result(self, setting, trial, score):

FILE: cotengra/experimental/path_compressed_branchbound.py
  class CompressedExhaustive (line 14) | class CompressedExhaustive:
    method __init__ (line 48) | def __init__(
    method setup (line 97) | def setup(self, inputs, output, size_dict):
    method expand_node (line 124) | def expand_node(
    method _update_progbar (line 219) | def _update_progbar(self, pbar, c):
    method run (line 227) | def run(self, inputs, output, size_dict):
    method ssa_path (line 294) | def ssa_path(self):
    method path (line 298) | def path(self):
    method explore_path (line 301) | def explore_path(self, path, high_priority=True, restrict=False):
    method search (line 346) | def search(self, inputs, output, size_dict):
    method __call__ (line 356) | def __call__(self, inputs, output, size_dict):
  function do_reconfigure (line 362) | def do_reconfigure(tree, time, chi):
  class CompressedTreeRefiner (line 373) | class CompressedTreeRefiner:
    method __init__ (line 374) | def __init__(
    method _check_score (line 400) | def _check_score(self, key, tree, score=None):
    method _get_next_tree (line 409) | def _get_next_tree(self):
    method _get_next_result_seq (line 415) | def _get_next_result_seq(self):
    method _get_next_result_par (line 420) | def _get_next_result_par(self, max_futures):
    method _process_result (line 437) | def _process_result(self, tree, key, time, old, new):
    method refine (line 445) | def refine(self, num_its=None, bins=30):
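`CompressedExhaustive` exposes both `ssa_path` and `path`; the difference is purely the indexing convention. A self-contained sketch of the standard conversion (illustrative, not cotengra's internal helper):

```python
def ssa_to_linear(ssa_path, n):
    """Convert an SSA-style contraction path (new ids are appended, old ids
    never reused) into the linear opt_einsum convention, where the remaining
    terms are reindexed after every pairwise contraction."""
    ids = list(range(n))  # current linear position -> ssa id
    path = []
    for si, sj in ssa_path:
        i, j = sorted((ids.index(si), ids.index(sj)))
        path.append((i, j))
        # the two contracted terms are removed, the result is appended
        ids.pop(j)
        ids.pop(i)
        ids.append(n)
        n += 1
    return path

# contract 4 tensors: (0,1) and (2,3), then the two intermediates
print(ssa_to_linear([(0, 1), (2, 3), (4, 5)], 4))  # → [(0, 1), (0, 1), (0, 1)]
```

SSA ids are stable, which makes them convenient for search and caching; linear paths are what `opt_einsum`-style contractors consume.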

FILE: cotengra/experimental/path_compressed_mcts.py
  class Node (line 10) | class Node:
    method __init__ (line 24) | def __init__(
    method update (line 45) | def update(self, x):
    method __hash__ (line 58) | def __hash__(self):
    method __lt__ (line 61) | def __lt__(self, other):
    method __repr__ (line 64) | def __repr__(self):
  class MCTS (line 74) | class MCTS:
    method __init__ (line 75) | def __init__(
    method __repr__ (line 100) | def __repr__(self):
    method setup (line 109) | def setup(self, inputs, output, size_dict):
    method get_ssa_path (line 133) | def get_ssa_path(self):
    method check_node (line 145) | def check_node(self, node):
    method delete_node (line 201) | def delete_node(self, node):
    method backprop (line 237) | def backprop(self, node):
    method simulate_node (line 255) | def simulate_node(self, node):
    method simulate_optimized (line 272) | def simulate_optimized(self, node):
    method is_deadend (line 307) | def is_deadend(self, node):
    method descend (line 315) | def descend(self):
    method ssa_path (line 340) | def ssa_path(self):
    method path (line 345) | def path(self):
    method run (line 349) | def run(self, inputs, output, size_dict):
    method search (line 364) | def search(self, inputs, output, size_dict):
    method __call__ (line 374) | def __call__(self, inputs, output, size_dict):
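The `descend` / `simulate_node` / `backprop` entries are the three phases of Monte Carlo tree search. The selection step typically uses an upper-confidence bound; a minimal sketch for a reward-maximizing formulation (cotengra's actual node statistics and sign conventions may differ):

```python
import math

def uct_score(child_mean, child_visits, parent_visits, c=1.4):
    """Upper-confidence bound used to pick which child to descend into:
    exploit children with good average reward, but explore rarely-visited
    ones via the visit-count bonus."""
    if child_visits == 0:
        return float("inf")  # unvisited children are always tried first
    return child_mean + c * math.sqrt(math.log(parent_visits) / child_visits)

def select(children, parent_visits):
    """children: list of (mean_reward, visits); return index to descend."""
    scores = [uct_score(m, v, parent_visits) for m, v in children]
    return scores.index(max(scores))
```

After a rollout (`simulate_node`), the result is propagated back up the visited nodes (`backprop`), updating each node's mean and visit count.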

FILE: cotengra/experimental/scoring.py
  class CompressedTracedObjective (line 1) | class CompressedTracedObjective(Objective):
    method __init__ (line 2) | def __init__(self, chi, compress_late=False, r=1):
    method trace (line 7) | def trace(self, trial):
  class CompressedSizeTracedObjective (line 52) | class CompressedSizeTracedObjective(CompressedTracedObjective):
    method __init__ (line 53) | def __init__(self, secondary_weight=1e-3, **kwargs):
    method __call__ (line 57) | def __call__(self, trial):
  class CompressedPeakTracedObjective (line 66) | class CompressedPeakTracedObjective(CompressedTracedObjective):
    method __init__ (line 67) | def __init__(self, secondary_weight=1e-3, **kwargs):
    method __call__ (line 71) | def __call__(self, trial):
  class CompressedFlopsTracedObjective (line 80) | class CompressedFlopsTracedObjective(CompressedTracedObjective):
    method __init__ (line 81) | def __init__(self, secondary_weight=1e-3, **kwargs):
    method __call__ (line 85) | def __call__(self, trial):
  class CompressedComboTracedObjective (line 94) | class CompressedComboTracedObjective(CompressedTracedObjective):
    method __init__ (line 95) | def __init__(self, factor=DEFAULT_COMBO_FACTOR, **kwargs):
    method __call__ (line 99) | def __call__(self, trial):

FILE: cotengra/hypergraph.py
  class HyperGraph (line 24) | class HyperGraph:
    method __init__ (line 58) | def __init__(self, inputs, output=None, size_dict=None):
    method copy (line 75) | def copy(self):
    method from_edges (line 87) | def from_edges(cls, edges, output=(), size_dict=()):
    method get_num_nodes (line 104) | def get_num_nodes(self):
    method num_nodes (line 108) | def num_nodes(self):
    method get_num_edges (line 111) | def get_num_edges(self):
    method num_edges (line 115) | def num_edges(self):
    method __len__ (line 118) | def __len__(self):
    method edges_size (line 121) | def edges_size(self, es):
    method bond_size (line 125) | def bond_size(self, i, j):
    method node_size (line 131) | def node_size(self, i):
    method neighborhood_size (line 135) | def neighborhood_size(self, nodes):
    method contract_pair_cost (line 145) | def contract_pair_cost(self, i, j):
    method neighborhood_compress_cost (line 151) | def neighborhood_compress_cost(self, chi, nodes):
    method total_node_size (line 187) | def total_node_size(self):
    method output_nodes (line 191) | def output_nodes(self):
    method neighbors (line 195) | def neighbors(self, i):
    method neighbor_edges (line 201) | def neighbor_edges(self, i):
    method has_node (line 211) | def has_node(self, i):
    method get_node (line 215) | def get_node(self, i):
    method get_edge (line 219) | def get_edge(self, e):
    method has_edge (line 223) | def has_edge(self, e):
    method next_node (line 227) | def next_node(self):
    method add_node (line 236) | def add_node(self, inds, node=None):
    method remove_node (line 252) | def remove_node(self, i):
    method remove_edge (line 261) | def remove_edge(self, e):
    method contract (line 267) | def contract(self, i, j, node=None):
    method compress (line 279) | def compress(self, chi, edges=None):
    method compute_contracted_inds (line 302) | def compute_contracted_inds(self, nodes):
    method candidate_contraction_size (line 313) | def candidate_contraction_size(self, i, j, chi=None):
    method all_shortest_distances (line 338) | def all_shortest_distances(
    method all_shortest_distances_condensed (line 390) | def all_shortest_distances_condensed(
    method simple_distance (line 409) | def simple_distance(self, region, p=2):
    method simple_closeness (line 438) | def simple_closeness(self, p=0.75, mu=0.5):
    method simple_centrality (line 494) | def simple_centrality(self, r=None, smoothness=2, **closeness_opts):
    method compute_loops (line 539) | def compute_loops(self, start=None, max_loop_length=None):
    method get_laplacian (line 607) | def get_laplacian(self):
    method get_resistance_distances (line 621) | def get_resistance_distances(self):
    method resistance_centrality (line 635) | def resistance_centrality(self, rescale=True):
    method to_networkx (line 645) | def to_networkx(H, as_tree_leaves=None):
    method compute_weights (line 709) | def compute_weights(
    method __repr__ (line 743) | def __repr__(self):
  function get_hypergraph (line 747) | def get_hypergraph(inputs, output=None, size_dict=None, accel=False):
  function calc_edge_weight (line 764) | def calc_edge_weight(ix, size_dict, scale="log"):
  function calc_edge_weight_float (line 780) | def calc_edge_weight_float(ix, size_dict, scale="log"):
  function calc_node_weight (line 796) | def calc_node_weight(term, size_dict, scale="linear"):
  function calc_node_weight_float (line 813) | def calc_node_weight_float(term, size_dict, scale="linear"):
  class LineGraph (line 830) | class LineGraph:
    method __init__ (line 833) | def __init__(self, inputs, output):
    method to_gr_str (line 849) | def to_gr_str(self):
    method to_gr_file (line 855) | def to_gr_file(self, fname):
    method to_cnf_str (line 860) | def to_cnf_str(self):
    method to_cnf_file (line 866) | def to_cnf_file(self, fname):
  function popcount (line 875) | def popcount(x):
  function popcount (line 886) | def popcount(x):
  function dict_affine_renorm (line 890) | def dict_affine_renorm(d):
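The core bookkeeping behind `HyperGraph` (`neighbors`, `contract_pair_cost`, etc.) is an incidence map from each index (hyperedge) to the tensors it appears on. A small illustrative reimplementation, not the accelerated version that `get_hypergraph(accel=True)` dispatches to:

```python
from collections import defaultdict
from math import prod

def build_incidence(inputs):
    """Map each index (hyperedge) to the list of tensors (nodes) it touches."""
    edges = defaultdict(list)
    for node, term in enumerate(inputs):
        for ix in term:
            edges[ix].append(node)
    return dict(edges)

def neighbors(edges, inputs, i):
    """All nodes sharing at least one index with node ``i``."""
    return {j for ix in inputs[i] for j in edges[ix] if j != i}

def contract_pair_cost(inputs, size_dict, i, j):
    """Scalar-multiplication count for contracting nodes i and j: the product
    of the dimensions of the union of their indices."""
    return prod(size_dict[ix] for ix in set(inputs[i]) | set(inputs[j]))

# a small triangle of tensors
inputs = [("a", "b"), ("b", "c"), ("c", "a")]
size_dict = {"a": 2, "b": 3, "c": 4}
edges = build_incidence(inputs)
```

Hyperedges (indices shared by three or more tensors) need no special casing in this representation, which is the point of working with hypergraphs rather than simple graphs.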

FILE: cotengra/hyperoptimizers/_param_mapping.py
  class LCBOptimizer (line 13) | class LCBOptimizer:
    method __init__ (line 19) | def __init__(self, options, exploration=1.0, temperature=1.0, seed=None):
    method ask (line 30) | def ask(self):
    method tell (line 53) | def tell(self, option, score):
  class Param (line 60) | class Param:
    method __init__ (line 65) | def __init__(self, name):
    method get_raw_bounds (line 69) | def get_raw_bounds(self):
    method convert_raw (line 72) | def convert_raw(self, vi):
  class ParamFloat (line 76) | class ParamFloat(Param):
    method __init__ (line 77) | def __init__(self, min, max, **kwargs):
    method convert_raw (line 82) | def convert_raw(self, x):
  class ParamFloatExp (line 90) | class ParamFloatExp(ParamFloat):
    method __init__ (line 93) | def __init__(self, min, max, power=0.5, **kwargs):
    method convert_raw (line 105) | def convert_raw(self, x):
  class ParamInt (line 112) | class ParamInt(Param):
    method __init__ (line 113) | def __init__(self, min, max, **kwargs):
    method convert_raw (line 118) | def convert_raw(self, x):
  class ParamString (line 126) | class ParamString(Param):
    method __init__ (line 127) | def __init__(self, options, name):
    method convert_raw (line 132) | def convert_raw(self, x):
  class ParamBool (line 137) | class ParamBool(Param):
    method __init__ (line 138) | def __init__(self, name):
    method convert_raw (line 143) | def convert_raw(self, x):
  function build_params (line 147) | def build_params(space, exponential_param_power=None):
  function convert_raw (line 185) | def convert_raw(params, x):
  function num_params (line 211) | def num_params(params):
  function generate_lhs_points (line 216) | def generate_lhs_points(ndim, n, rng):
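The `Param*` classes and `convert_raw` map a raw continuous search vector onto typed hyperparameters. A sketch of the idea, assuming a raw domain of [-1, 1] per dimension (the real classes also handle exponential rescaling and bounds bookkeeping):

```python
def convert_raw_float(x, lo, hi):
    """Map a raw value in [-1, 1] linearly onto [lo, hi]."""
    return lo + (x + 1.0) * (hi - lo) / 2.0

def convert_raw_int(x, lo, hi):
    """Map a raw value in [-1, 1] onto the integers lo..hi inclusive."""
    return min(hi, max(lo, int(round(convert_raw_float(x, lo, hi)))))

def convert_raw_option(x, options):
    """Map a raw value in [-1, 1] onto one of a list of categorical options."""
    idx = int((x + 1.0) / 2.0 * len(options))
    return options[min(idx, len(options) - 1)]

# e.g. decode one raw dimension into a named hyperparameter
params = {"random_strength": convert_raw_float(0.2, 0.0, 1.0)}
```

Keeping every backend optimizer working in the same raw cube means any continuous ask/tell engine (CMA-ES, Nelder-Mead, ...) can drive a mixed discrete/continuous space.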

FILE: cotengra/hyperoptimizers/hyper.py
  function get_default_hq_methods (line 30) | def get_default_hq_methods():
  function get_default_optlib_eco (line 45) | def get_default_optlib_eco():
  function get_default_optlib (line 58) | def get_default_optlib():
  function get_hyper_space (line 77) | def get_hyper_space():
  function get_hyper_constants (line 81) | def get_hyper_constants():
  class HyperOptLib (line 85) | class HyperOptLib:
    method setup (line 92) | def setup(self, methods, space, optimizer=None, **kwargs):
    method get_setting (line 109) | def get_setting(self):
    method report_result (line 120) | def report_result(self, setting, trial, score):
    method cleanup (line 134) | def cleanup(self):
  function register_hyper_optlib (line 142) | def register_hyper_optlib(name, cls, defaults=None):
  function register_hyper_function (line 156) | def register_hyper_function(name, ssa_func, space, constants=None):
  function list_hyper_functions (line 177) | def list_hyper_functions():
  function base_trial_fn (line 182) | def base_trial_fn(inputs, output, size_dict, method, **kwargs):
  class TrialSetObjective (line 200) | class TrialSetObjective:
    method __init__ (line 201) | def __init__(self, trial_fn, objective):
    method __call__ (line 205) | def __call__(self, *args, **kwargs):
  class TrialConvertTree (line 211) | class TrialConvertTree:
    method __init__ (line 212) | def __init__(self, trial_fn, cls):
    method __call__ (line 216) | def __call__(self, *args, **kwargs):
  class TrialTreeMulti (line 226) | class TrialTreeMulti:
    method __init__ (line 227) | def __init__(self, trial_fn, varmults, numconfigs):
    method __call__ (line 232) | def __call__(self, *args, **kwargs):
  class SlicedTrialFn (line 245) | class SlicedTrialFn:
    method __init__ (line 246) | def __init__(self, trial_fn, **opts):
    method __call__ (line 250) | def __call__(self, *args, **kwargs):
  class SimulatedAnnealingTrialFn (line 265) | class SimulatedAnnealingTrialFn:
    method __init__ (line 266) | def __init__(self, trial_fn, **opts):
    method __call__ (line 270) | def __call__(self, *args, **kwargs):
  class ReconfTrialFn (line 282) | class ReconfTrialFn:
    method __init__ (line 283) | def __init__(self, trial_fn, forested=False, parallel=False, **opts):
    method __call__ (line 289) | def __call__(self, *args, **kwargs):
  class SlicedReconfTrialFn (line 311) | class SlicedReconfTrialFn:
    method __init__ (line 312) | def __init__(self, trial_fn, forested=False, parallel=False, **opts):
    method __call__ (line 318) | def __call__(self, *args, **kwargs):
  class CompressedReconfTrial (line 340) | class CompressedReconfTrial:
    method __init__ (line 341) | def __init__(self, trial_fn, minimize=None, **opts):
    method __call__ (line 346) | def __call__(self, *args, **kwargs):
  class ComputeScore (line 353) | class ComputeScore:
    method __init__ (line 358) | def __init__(
    method __call__ (line 378) | def __call__(self, *args, **kwargs):
  function progress_description (line 421) | def progress_description(best, info="concise"):
  class HyperOptimizer (line 431) | class HyperOptimizer(PathOptimizer):
    method __init__ (line 497) | def __init__(
    method minimize (line 588) | def minimize(self):
    method minimize (line 592) | def minimize(self, minimize):
    method parallel (line 600) | def parallel(self):
    method parallel (line 604) | def parallel(self, parallel):
    method tree (line 614) | def tree(self):
    method path (line 618) | def path(self):
    method setup (line 621) | def setup(self, inputs, output, size_dict):
    method _maybe_cancel_futures (line 665) | def _maybe_cancel_futures(self):
    method _maybe_report_result (line 671) | def _maybe_report_result(self, setting, trial):
    method _gen_results (line 703) | def _gen_results(self, repeats, trial_fn, trial_args):
    method _get_and_report_next_future (line 721) | def _get_and_report_next_future(self):
    method _gen_results_parallel (line 733) | def _gen_results_parallel(self, repeats, trial_fn, trial_args):
    method _search (line 757) | def _search(self, inputs, output, size_dict):
    method search (line 836) | def search(self, inputs, output, size_dict):
    method get_tree (line 847) | def get_tree(self):
    method __call__ (line 851) | def __call__(self, inputs, output, size_dict, memory_limit=None):
    method get_trials (line 860) | def get_trials(self, sort=None):
    method print_trials (line 892) | def print_trials(self, sort=None):
    method to_df (line 911) | def to_df(self):
    method to_dfs_parametrized (line 930) | def to_dfs_parametrized(self):
  class ReusableHyperOptimizer (line 962) | class ReusableHyperOptimizer(ReusableOptimizer):
    method _get_path_relevant_opts (line 993) | def _get_path_relevant_opts(self):
    method _get_suboptimizer (line 1011) | def _get_suboptimizer(self):
    method _deconstruct_tree (line 1014) | def _deconstruct_tree(self, opt, tree):
    method _reconstruct_tree (line 1022) | def _reconstruct_tree(self, inputs, output, size_dict, con):
  class HyperCompressedOptimizer (line 1037) | class HyperCompressedOptimizer(HyperOptimizer):
    method __init__ (line 1099) | def __init__(
  class ReusableHyperCompressedOptimizer (line 1124) | class ReusableHyperCompressedOptimizer(ReusableHyperOptimizer):
    method __init__ (line 1154) | def __init__(
    method _get_suboptimizer (line 1183) | def _get_suboptimizer(self):
    method _deconstruct_tree (line 1186) | def _deconstruct_tree(self, opt, tree):
    method _reconstruct_tree (line 1193) | def _reconstruct_tree(self, inputs, output, size_dict, con):
  class HyperMultiOptimizer (line 1203) | class HyperMultiOptimizer(HyperOptimizer):
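Every `*OptLib` backend in this package exposes the same three hooks: `setup`, `get_setting`, and `report_result`, which the `HyperOptimizer` search loop drives. A hedged sketch of that loop against a stand-in random-search backend (names here are illustrative, not the real classes):

```python
import random

class RandomSearchLib:
    """Minimal stand-in for a HyperOptLib-style backend: get_setting proposes
    a trial configuration and report_result receives its score."""

    def __init__(self, space, seed=None):
        self.space = space
        self.rng = random.Random(seed)
        self.history = []

    def get_setting(self):
        return {k: self.rng.choice(v) for k, v in self.space.items()}

    def report_result(self, setting, score):
        self.history.append((setting, score))

def drive(optlib, trial_fn, max_repeats):
    """The hyper-optimization driver loop: ask, evaluate, tell, keep the best."""
    best_setting, best_score = None, float("inf")
    for _ in range(max_repeats):
        setting = optlib.get_setting()
        score = trial_fn(setting)
        optlib.report_result(setting, score)
        if score < best_score:
            best_setting, best_score = setting, score
    return best_setting, best_score

space = {"method": ["greedy", "kahypar"], "cutoff": [10, 15, 20]}
lib = RandomSearchLib(space, seed=1)
# a toy objective standing in for running a contraction-path trial
best, score = drive(
    lib, lambda s: s["cutoff"] - (5 if s["method"] == "greedy" else 0), 20
)
```

The real loop additionally wraps the trial function (slicing, reconfiguration, scoring) and can fan trials out to a parallel pool, but the ask/tell contract is the same.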

FILE: cotengra/hyperoptimizers/hyper_cmaes.py
  class HyperCMAESSampler (line 16) | class HyperCMAESSampler:
    method __init__ (line 17) | def __init__(
    method ask (line 51) | def ask(self):
    method tell (line 62) | def tell(self, trial_number, value):
  class CMAESOptLib (line 72) | class CMAESOptLib(HyperOptLib):
    method setup (line 77) | def setup(
    method get_setting (line 105) | def get_setting(self):
    method report_result (line 115) | def report_result(self, setting, trial, score):

FILE: cotengra/hyperoptimizers/hyper_es.py
  function _reflect (line 19) | def _reflect(x):
  class SteadyStateES (line 30) | class SteadyStateES:
    method __init__ (line 95) | def __init__(
    method _init_state (line 162) | def _init_state(self):
    method ask (line 191) | def ask(self):
    method tell (line 254) | def tell(self, trial_number, score):
    method _restart (line 333) | def _restart(self):
  class ESOptLib (line 354) | class ESOptLib(HyperOptLib):
    method setup (line 357) | def setup(
    method get_setting (line 468) | def get_setting(self):
    method report_result (line 479) | def report_result(self, setting, trial, score):

FILE: cotengra/hyperoptimizers/hyper_neldermead.py
  function _clip (line 23) | def _clip(x, lo=-1.0, hi=1.0):
  function clamp (line 28) | def clamp(xs, lo=-1.0, hi=1.0):
  class _NMCore (line 33) | class _NMCore:
    method __init__ (line 86) | def __init__(
    method converged (line 146) | def converged(self):
    method best_vertex (line 151) | def best_vertex(self):
    method best_score (line 155) | def best_score(self):
    method _centroid_of (line 158) | def _centroid_of(self, vertices):
    method _reflect (line 162) | def _reflect(self, centroid, worst):
    method _expand (line 170) | def _expand(self, centroid, reflected):
    method _contract_outside_pt (line 178) | def _contract_outside_pt(self, centroid, reflected):
    method _contract_inside_pt (line 186) | def _contract_inside_pt(self, centroid, worst):
    method _shrink_vertex (line 194) | def _shrink_vertex(self, best, vertex):
    method _sort_simplex (line 202) | def _sort_simplex(self):
    method _simplex_diameter (line 207) | def _simplex_diameter(self):
    method _diameter_converged (line 221) | def _diameter_converged(self):
    method _initialize_simplex (line 237) | def _initialize_simplex(self, center, scales):
    method _enqueue (line 252) | def _enqueue(self, x, role):
    method _try_advance (line 259) | def _try_advance(self):
    method _try_advance_init (line 271) | def _try_advance_init(self):
    method inject_vertex (line 296) | def inject_vertex(self, x, score):
    method _begin_reflect (line 351) | def _begin_reflect(self):
    method _try_advance_reflect (line 371) | def _try_advance_reflect(self):
    method _try_advance_expand (line 418) | def _try_advance_expand(self):
    method _try_advance_contract (line 441) | def _try_advance_contract(self):
    method _begin_shrink (line 469) | def _begin_shrink(self):
    method _try_advance_shrink (line 476) | def _try_advance_shrink(self):
    method ask (line 497) | def ask(self):
    method tell (line 510) | def tell(self, token, score):
  class HyperNelderMeadSampler (line 531) | class HyperNelderMeadSampler:
    method __init__ (line 580) | def __init__(
    method _make_core (line 652) | def _make_core(self, center, scales):
    method _ask_filler (line 667) | def _ask_filler(self):
    method ask (line 694) | def ask(self):
    method tell (line 743) | def tell(self, trial_number, score):
  class NelderMeadOptLib (line 811) | class NelderMeadOptLib(HyperOptLib):
    method setup (line 814) | def setup(
    method get_setting (line 919) | def get_setting(self):
    method report_result (line 930) | def report_result(self, setting, trial, score):

FILE: cotengra/hyperoptimizers/hyper_nevergrad.py
  function convert_param_to_nevergrad (line 6) | def convert_param_to_nevergrad(param):
  function get_methods_space (line 23) | def get_methods_space(methods):
  function convert_to_nevergrad_space (line 29) | def convert_to_nevergrad_space(method, space):
  class NevergradOptLib (line 37) | class NevergradOptLib(HyperOptLib):
    method setup (line 40) | def setup(
    method get_setting (line 123) | def get_setting(self):
    method report_result (line 134) | def report_result(self, setting, trial, score):

FILE: cotengra/hyperoptimizers/hyper_optuna.py
  function make_getter (line 8) | def make_getter(name, param):
  function make_retriever (line 28) | def make_retriever(methods, space):
  class OptunaOptLib (line 57) | class OptunaOptLib(HyperOptLib):
    method setup (line 60) | def setup(
    method get_setting (line 80) | def get_setting(self):
    method report_result (line 96) | def report_result(self, setting, trial, score):

FILE: cotengra/hyperoptimizers/hyper_random.py
  function sample_bool (line 10) | def sample_bool(rng):
  function sample_int (line 14) | def sample_int(rng, low, high):
  function sample_option (line 18) | def sample_option(rng, options):
  function sample_uniform (line 22) | def sample_uniform(rng, low, high):
  function sample_loguniform (line 26) | def sample_loguniform(rng, low, high):
  class RandomSpace (line 30) | class RandomSpace:
    method __init__ (line 31) | def __init__(self, space, seed=None):
    method sample (line 62) | def sample(self):
  class LHSRandomSpace (line 66) | class LHSRandomSpace:
    method __init__ (line 85) | def __init__(self, space, n, seed=None):
    method _stratify_float (line 127) | def _stratify_float(self, n, lo, hi):
    method _stratify_int (line 139) | def _stratify_int(self, n, lo, hi):
    method _stratify_categorical (line 157) | def _stratify_categorical(self, n, options):
    method sample (line 170) | def sample(self):
  class RandomSampler (line 179) | class RandomSampler:
    method __init__ (line 196) | def __init__(self, methods, spaces, n_samples=None, seed=None):
    method ask (line 209) | def ask(self):
  class RandomOptLib (line 216) | class RandomOptLib(HyperOptLib):
    method setup (line 226) | def setup(
    method get_setting (line 261) | def get_setting(self):
    method report_result (line 265) | def report_result(self, setting, trial, score):
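Of the samplers listed above, `sample_loguniform` is the one worth spelling out: sampling uniformly in log-space makes every decade between the bounds equally likely, which suits scale-type hyperparameters. A standard stdlib implementation (presumably along these lines, though not copied from the source):

```python
import math
import random

def sample_loguniform(rng, low, high):
    """Sample log-uniformly on [low, high]: draw uniformly in log-space,
    then exponentiate."""
    return math.exp(rng.uniform(math.log(low), math.log(high)))

rng = random.Random(0)
xs = [sample_loguniform(rng, 1e-3, 1e3) for _ in range(1000)]
```

By symmetry, roughly half the samples should fall below 1, even though [1, 1e3] covers almost all of the interval linearly.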

FILE: cotengra/hyperoptimizers/hyper_sbplx.py
  class HyperSbplxSampler (line 19) | class HyperSbplxSampler:
    method __init__ (line 89) | def __init__(
    method _partition_dims (line 188) | def _partition_dims(self):
    method _partition_greedy (line 210) | def _partition_greedy(self, order):
    method _partition_goodness (line 227) | def _partition_goodness(self, order, magnitudes):
    method _clamp_scale_factor (line 267) | def _clamp_scale_factor(self, factor):
    method _start_cycle (line 270) | def _start_cycle(self):
    method _start_sub_nm (line 279) | def _start_sub_nm(self):
    method _cycle_converged (line 302) | def _cycle_converged(self):
    method _embed_sub_vector (line 315) | def _embed_sub_vector(self, sub_x):
    method _ask_filler (line 324) | def _ask_filler(self):
    method _rescale_step (line 343) | def _rescale_step(self, step, factor, minimum=0.0):
    method _reset_cycle_state (line 360) | def _reset_cycle_state(self):
    method _restart (line 370) | def _restart(self, mode):
    method ask (line 408) | def ask(self):
    method tell (line 472) | def tell(self, trial_number, score):
    method _finish_subspace (line 527) | def _finish_subspace(self):
    method _update_steps_after_cycle (line 544) | def _update_steps_after_cycle(self):
    method _finish_cycle (line 581) | def _finish_cycle(self):
  class SbplxOptLib (line 616) | class SbplxOptLib(HyperOptLib):
    method setup (line 621) | def setup(
    method get_setting (line 746) | def get_setting(self):
    method report_result (line 757) | def report_result(self, setting, trial, score):

FILE: cotengra/hyperoptimizers/hyper_skopt.py
  function convert_param_to_skopt (line 6) | def convert_param_to_skopt(param, name):
  function get_methods_space (line 29) | def get_methods_space(methods):
  function convert_to_skopt_space (line 35) | def convert_to_skopt_space(method, space):
  class SkoptOptLib (line 42) | class SkoptOptLib(HyperOptLib):
    method setup (line 45) | def setup(
    method get_setting (line 111) | def get_setting(self):
    method report_result (line 129) | def report_result(self, setting, trial, score):

FILE: cotengra/interface.py
  function register_preset (line 26) | def register_preset(
  function preset_to_optimizer (line 74) | def preset_to_optimizer(preset):
  function can_hash_optimize (line 91) | def can_hash_optimize(cls):
  function identity (line 98) | def identity(x):
  function list_hash_prepare (line 105) | def list_hash_prepare(optimize):
  function hash_prepare_optimize (line 112) | def hash_prepare_optimize(optimize):
  function hash_contraction (line 125) | def hash_contraction(inputs, output, size_dict, optimize, **kwargs):
  function normalize_input (line 136) | def normalize_input(
  function _find_path_explicit_path (line 174) | def _find_path_explicit_path(inputs, output, size_dict, optimize):
  function _find_path_optimizer (line 186) | def _find_path_optimizer(inputs, output, size_dict, optimize, **kwargs):
  function _find_path_preset (line 190) | def _find_path_preset(inputs, output, size_dict, optimize, **kwargs):
  function _find_path_tree (line 195) | def _find_path_tree(inputs, output, size_dict, optimize, **kwargs):
  function find_path (line 199) | def find_path(inputs, output, size_dict, optimize="auto", **kwargs):
  function array_contract_path (line 242) | def array_contract_path(
  function _find_tree_explicit (line 310) | def _find_tree_explicit(inputs, output, size_dict, optimize, **kwargs):
  function _find_tree_optimizer_search (line 321) | def _find_tree_optimizer_search(inputs, output, size_dict, optimize, **k...
  function _find_tree_optimizer_basic (line 325) | def _find_tree_optimizer_basic(inputs, output, size_dict, optimize, **kw...
  function _find_tree_preset (line 330) | def _find_tree_preset(inputs, output, size_dict, optimize, **kwargs):
  function _find_tree_tree (line 343) | def _find_tree_tree(inputs, output, size_dict, optimize, **kwargs):
  function find_tree (line 351) | def find_tree(inputs, output, size_dict, optimize="auto", **kwargs):
  function array_contract_tree (line 394) | def array_contract_tree(
  class Variadic (line 461) | class Variadic:
    method __init__ (line 468) | def __init__(self, fn, **kwargs):
    method __call__ (line 472) | def __call__(self, *arrays, **kwargs):
  class Via (line 476) | class Via:
    method __init__ (line 483) | def __init__(self, fn, convert_in, convert_out):
    method __call__ (line 488) | def __call__(self, *arrays, **kwargs):
  class WithBackend (line 494) | class WithBackend:
    method __init__ (line 501) | def __init__(self, fn):
    method __call__ (line 504) | def __call__(self, *args, backend=None, **kwargs):
  function _array_contract_expression_with_constants (line 511) | def _array_contract_expression_with_constants(
  function _wrap_strip_exponent_final (line 577) | def _wrap_strip_exponent_final(fn):
  function _build_expression (line 585) | def _build_expression(
  function array_contract_expression (line 673) | def array_contract_expression(
  function array_contract (line 805) | def array_contract(
  function einsum_tree (line 875) | def einsum_tree(
  function einsum_expression (line 925) | def einsum_expression(
  function einsum (line 1039) | def einsum(
  function ncon (line 1111) | def ncon(arrays, indices, **kwargs):

FILE: cotengra/nodeops.py
  class NodeOpsFrozenset (line 5) | class NodeOpsFrozenset:
    method copy (line 10) | def copy(self):
    method node_from_single (line 25) | def node_from_single(x):
    method node_supremum (line 30) | def node_supremum(size):
    method node_get_single_el (line 35) | def node_get_single_el(node):
    method node_tie_breaker (line 40) | def node_tie_breaker(x):
    method is_valid_node (line 48) | def is_valid_node(node):
    method node_union (line 61) | def node_union(x, y):
    method node_union_it (line 66) | def node_union_it(bs):
    method node_issubset (line 74) | def node_issubset(x, y):
    method is_leaf (line 79) | def is_leaf(node):
    method is_supremum (line 83) | def is_supremum(node, N):
  class BitSetInt (line 87) | class BitSetInt(int):
    method __new__ (line 102) | def __new__(cls, it=0):
    method __hash__ (line 109) | def __hash__(self):
    method infimum (line 113) | def infimum(cls):
    method supremum (line 117) | def supremum(cls, size):
    method __len__ (line 120) | def __len__(self):
    method iter_sparse (line 123) | def iter_sparse(self):
    method iter_dense (line 130) | def iter_dense(self):
    method iter_numpy (line 133) | def iter_numpy(self):
    method __iter__ (line 143) | def __iter__(self):
    method __contains__ (line 163) | def __contains__(self, i):
    method __sub__ (line 166) | def __sub__(self, i):
    method difference (line 169) | def difference(self, i):
    method __and__ (line 172) | def __and__(self, i):
    method intersection (line 175) | def intersection(self, i):
    method __or__ (line 178) | def __or__(self, i):
    method union (line 181) | def union(*it):
    method issubset (line 184) | def issubset(self, other):
    method __repr__ (line 192) | def __repr__(self):
  class NodeOpsBitSetInt (line 196) | class NodeOpsBitSetInt:
    method copy (line 201) | def copy(self):
    method node_from_single (line 210) | def node_from_single(x):
    method node_supremum (line 214) | def node_supremum(size):
    method node_get_single_el (line 218) | def node_get_single_el(node):
    method node_tie_breaker (line 222) | def node_tie_breaker(x):
    method is_valid_node (line 226) | def is_valid_node(node):
    method node_union (line 233) | def node_union(x, y):
    method node_union_it (line 237) | def node_union_it(bs):
    method node_issubset (line 244) | def node_issubset(x, y):
    method is_leaf (line 248) | def is_leaf(node):
    method is_supremum (line 252) | def is_supremum(node, N):
  class NodeOpsSSA (line 256) | class NodeOpsSSA:
    method __init__ (line 259) | def __init__(self, N):
    method copy (line 263) | def copy(self):
    method get_next_ssa (line 269) | def get_next_ssa(self):
    method node_size (line 276) | def node_size(self, node):
    method node_from_single (line 279) | def node_from_single(self, x):
    method node_supremum (line 282) | def node_supremum(self, size):
    method node_get_single_el (line 285) | def node_get_single_el(self, node):
    method node_tie_breaker (line 288) | def node_tie_breaker(self, x):
    method is_valid_node (line 291) | def is_valid_node(self, node):
    method node_union (line 294) | def node_union(self, x, y):
    method node_union_it (line 297) | def node_union_it(self, bs):
    method new_node_for_union (line 300) | def new_node_for_union(self, nodes):
    method node_from_seq (line 308) | def node_from_seq(self, seq):
    method new_node_for_seq (line 311) | def new_node_for_seq(self, seq):
    method node_issubset (line 320) | def node_issubset(self, x, y):
    method is_leaf (line 323) | def is_leaf(self, node):
    method is_supremum (line 326) | def is_supremum(self, node, N):
  function get_nodeops (line 334) | def get_nodeops(node_type_str: str, N=None):
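The `NodeOps*` classes above abstract operations on "nodes" (subsets of input tensors) over different encodings: frozensets, integer bitmasks, and SSA ids. As a minimal sketch of the int-bitmask idea behind `NodeOpsBitSetInt` — assuming the conventional encoding where leaf `i` corresponds to bit `i`; function names here are illustrative, not the library's exact API:

```python
# Subset-of-tensors operations encoded as an int bitmask: leaf i <-> bit i.
def node_from_single(i):
    return 1 << i

def node_union(x, y):
    return x | y  # set union is bitwise OR

def node_issubset(x, y):
    return x & y == x  # x is a subset of y iff all of x's bits are in y

def node_supremum(size):
    return (1 << size) - 1  # the full set of `size` leaves

def iter_members(node):
    # yield the leaf indices present in the bitmask
    i = 0
    while node:
        if node & 1:
            yield i
        node >>= 1
        i += 1
```

Bitmask nodes make union, subset tests, and hashing O(1)-ish machine-word operations, which is why they are attractive in tight path-finding loops.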

FILE: cotengra/oe.py
  class PathOptimizer (line 10) | class PathOptimizer:
  function get_path_fn (line 13) | def get_path_fn(*_, **__):
  function register_path_fn (line 16) | def register_path_fn(*_, **__):

FILE: cotengra/parallel.py
  function _remember_auto_backend (line 39) | def _remember_auto_backend(backend):
  function choose_default_num_workers (line 60) | def choose_default_num_workers():
  function get_pool (line 70) | def get_pool(n_workers=None, maybe_create=False, backend=None):
  function _infer_backed_cached (line 101) | def _infer_backed_cached(pool_class):
  function _infer_backend (line 124) | def _infer_backend(pool):
  function get_n_workers (line 129) | def get_n_workers(pool=None):
  function parse_parallel_arg (line 163) | def parse_parallel_arg(parallel):
  function set_parallel_backend (line 220) | def set_parallel_backend(backend):
  function maybe_leave_pool (line 230) | def maybe_leave_pool(pool):
  function maybe_rejoin_pool (line 236) | def maybe_rejoin_pool(is_worker, pool):
  function _worker_call (line 242) | def _worker_call(fn, args, kwargs):
  function submit (line 255) | def submit(pool, fn, *args, **kwargs):
  function scatter (line 269) | def scatter(pool, data):
  function can_scatter (line 276) | def can_scatter(pool):
  function should_nest (line 281) | def should_nest(pool):
  function get_loky_get_reusable_executor (line 295) | def get_loky_get_reusable_executor():
  class CachedProcessPoolExecutor (line 306) | class CachedProcessPoolExecutor:
    method __init__ (line 307) | def __init__(self):
    method __call__ (line 313) | def __call__(self, n_workers=None):
    method is_initialized (line 328) | def is_initialized(self):
    method shutdown (line 331) | def shutdown(self):
    method __del__ (line 338) | def __del__(self):
  function _get_process_pool_cf (line 345) | def _get_process_pool_cf(n_workers=None):
  class CachedThreadPoolExecutor (line 349) | class CachedThreadPoolExecutor:
    method __init__ (line 350) | def __init__(self):
    method __call__ (line 356) | def __call__(self, n_workers=None):
    method is_initialized (line 370) | def is_initialized(self):
    method shutdown (line 373) | def shutdown(self):
    method __del__ (line 380) | def __del__(self):
  function _get_thread_pool_cf (line 387) | def _get_thread_pool_cf(n_workers=None):
  function _get_pool_dask (line 394) | def _get_pool_dask(n_workers=None, maybe_create=False):
  function _maybe_leave_pool_dask (line 459) | def _maybe_leave_pool_dask():
  function _rejoin_pool_dask (line 470) | def _rejoin_pool_dask():
  function get_ray (line 480) | def get_ray():
  class RayFuture (line 487) | class RayFuture:
    method __init__ (line 492) | def __init__(self, obj):
    method result (line 496) | def result(self, timeout=None):
    method done (line 499) | def done(self):
    method cancel (line 504) | def cancel(self):
  function _unpack_futures_tuple (line 509) | def _unpack_futures_tuple(x):
  function _unpack_futures_list (line 513) | def _unpack_futures_list(x):
  function _unpack_futures_dict (line 517) | def _unpack_futures_dict(x):
  function _unpack_futures_identity (line 521) | def _unpack_futures_identity(x):
  function _unpack_futures (line 536) | def _unpack_futures(x):
  function get_remote_fn (line 547) | def get_remote_fn(fn, **remote_opts):
  function get_fn_as_remote_object (line 556) | def get_fn_as_remote_object(fn):
  function get_deploy (line 562) | def get_deploy(**remote_opts):
  class RayExecutor (line 576) | class RayExecutor:
    method __init__ (line 579) | def __init__(self, *args, default_remote_opts=None, **kwargs):
    method _maybe_inject_remote_opts (line 588) | def _maybe_inject_remote_opts(self, remote_opts=None):
    method submit (line 597) | def submit(self, fn, *args, pure=False, remote_opts=None, **kwargs):
    method map (line 615) | def map(self, func, *iterables, remote_opts=None):
    method scatter (line 623) | def scatter(self, data):
    method shutdown (line 630) | def shutdown(self):
  function _get_pool_ray (line 640) | def _get_pool_ray(n_workers=None, maybe_create=False):
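`CachedProcessPoolExecutor` and `CachedThreadPoolExecutor` above follow a lazy create-and-reuse pattern for worker pools. A minimal sketch of that caching pattern using only the stdlib — a simplified stand-in, assuming (hypothetically) that the pool is rebuilt whenever a different worker count is requested:

```python
from concurrent.futures import ThreadPoolExecutor

class CachedPool:
    """Lazily create a ThreadPoolExecutor and reuse it across calls."""

    def __init__(self):
        self._pool = None
        self._n_workers = None

    def __call__(self, n_workers=None):
        if self._pool is not None and n_workers not in (None, self._n_workers):
            self.shutdown()  # requested size changed: rebuild the pool
        if self._pool is None:
            self._n_workers = n_workers
            self._pool = ThreadPoolExecutor(n_workers)
        return self._pool

    def shutdown(self):
        if self._pool is not None:
            self._pool.shutdown()
            self._pool = None
```

Usage: `pool = CachedPool(); pool().submit(sum, [1, 2, 3]).result()` — repeated calls to `pool()` hand back the same executor rather than spawning a new one per query.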

FILE: cotengra/pathfinders/path_basic.py
  function is_simplifiable (line 18) | def is_simplifiable(legs, appearances):
  function compute_simplified (line 31) | def compute_simplified(legs, appearances):
  function compute_contracted (line 58) | def compute_contracted(ilegs, jlegs, appearances):
  function compute_size (line 97) | def compute_size(legs, sizes):
  function compute_flops (line 105) | def compute_flops(ilegs, jlegs, sizes):
  function compute_con_cost_flops (line 118) | def compute_con_cost_flops(
  function compute_con_cost_max (line 140) | def compute_con_cost_max(
  function compute_con_cost_size (line 162) | def compute_con_cost_size(
  function compute_con_cost_write (line 184) | def compute_con_cost_write(
  function compute_con_cost_combo (line 207) | def compute_con_cost_combo(
  function compute_con_cost_limit (line 237) | def compute_con_cost_limit(
  function parse_minimize_for_optimal (line 271) | def parse_minimize_for_optimal(minimize):
  class ContractionProcessor (line 316) | class ContractionProcessor:
    method __init__ (line 334) | def __init__(
    method copy (line 378) | def copy(self):
    method neighbors (line 392) | def neighbors(self, i):
    method neighbors_limit (line 411) | def neighbors_limit(self, i, max_neighbors):
    method print_current_terms (line 443) | def print_current_terms(self):
    method remove_ix (line 448) | def remove_ix(self, ix):
    method pop_node (line 457) | def pop_node(self, i):
    method add_node (line 473) | def add_node(self, legs):
    method check (line 484) | def check(self):
    method contract_nodes (line 493) | def contract_nodes(self, i, j, new_legs=None):
    method simplify_batch (line 510) | def simplify_batch(self):
    method simplify_single_terms (line 522) | def simplify_single_terms(self):
    method simplify_scalars (line 532) | def simplify_scalars(self):
    method simplify_hadamard (line 558) | def simplify_hadamard(self):
    method simplify (line 576) | def simplify(self):
    method subgraphs (line 587) | def subgraphs(self):
    method optimize_greedy (line 607) | def optimize_greedy(
    method optimize_optimal_connected (line 698) | def optimize_optimal_connected(
    method optimize_optimal (line 815) | def optimize_optimal(
    method optimize_remaining_by_size (line 827) | def optimize_remaining_by_size(self):
  function linear_to_ssa (line 855) | def linear_to_ssa(path, N=None):
  function ssa_to_linear (line 877) | def ssa_to_linear(ssa_path, N=None):
  function edge_path_to_ssa (line 902) | def edge_path_to_ssa(edge_path, inputs):
  function edge_path_to_linear (line 960) | def edge_path_to_linear(edge_path, inputs):
  function is_ssa_path (line 980) | def is_ssa_path(path, nterms):
  function optimize_simplify (line 995) | def optimize_simplify(inputs, output, size_dict, use_ssa=False):
  function optimize_greedy (line 1029) | def optimize_greedy(
  function optimize_random_greedy_track_flops (line 1103) | def optimize_random_greedy_track_flops(
  function optimize_optimal (line 1242) | def optimize_optimal(
  class EnsureInputsOutputAreSequence (line 1329) | class EnsureInputsOutputAreSequence:
    method __init__ (line 1330) | def __init__(self, f):
    method __call__ (line 1333) | def __call__(self, inputs, output, *args, **kwargs):
  function get_optimize_greedy (line 1342) | def get_optimize_greedy(accel="auto"):
  function get_optimize_random_greedy_track_flops (line 1360) | def get_optimize_random_greedy_track_flops(accel="auto"):
  class GreedyOptimizer (line 1377) | class GreedyOptimizer(PathOptimizer):
    method __init__ (line 1390) | def __init__(
    method maybe_update_defaults (line 1404) | def maybe_update_defaults(self, **kwargs):
    method ssa_path (line 1415) | def ssa_path(self, inputs, output, size_dict, **kwargs):
    method search (line 1424) | def search(self, inputs, output, size_dict, **kwargs):
    method __call__ (line 1432) | def __call__(self, inputs, output, size_dict, **kwargs):
  class RandomGreedyOptimizer (line 1442) | class RandomGreedyOptimizer(PathOptimizer):
    method __init__ (line 1505) | def __init__(
    method maybe_update_defaults (line 1548) | def maybe_update_defaults(self, **kwargs):
    method ssa_path (line 1559) | def ssa_path(self, inputs, output, size_dict, **kwargs):
    method search (line 1602) | def search(self, inputs, output, size_dict, **kwargs):
    method __call__ (line 1619) | def __call__(self, inputs, output, size_dict, **kwargs):
  class ReusableRandomGreedyOptimizer (line 1629) | class ReusableRandomGreedyOptimizer(ReusableOptimizer):
    method _get_path_relevant_opts (line 1632) | def _get_path_relevant_opts(self):
    method _get_suboptimizer (line 1645) | def _get_suboptimizer(self):
    method _deconstruct_tree (line 1648) | def _deconstruct_tree(self, opt, tree):
    method _reconstruct_tree (line 1656) | def _reconstruct_tree(self, inputs, output, size_dict, con):
  function get_optimize_optimal (line 1669) | def get_optimize_optimal(accel="auto"):
  class OptimalOptimizer (line 1686) | class OptimalOptimizer(PathOptimizer):
    method __init__ (line 1699) | def __init__(
    method maybe_update_defaults (line 1713) | def maybe_update_defaults(self, **kwargs):
    method ssa_path (line 1724) | def ssa_path(self, inputs, output, size_dict, **kwargs):
    method search (line 1733) | def search(self, inputs, output, size_dict, **kwargs):
    method __call__ (line 1741) | def __call__(self, inputs, output, size_dict, **kwargs):
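`path_basic.py` lists both `linear_to_ssa` and `ssa_to_linear`; the distinction is the standard one (also used by opt_einsum): a *linear* path indexes into the shrinking list of remaining tensors, while an *SSA* path gives every intermediate a fresh id that is never reused. A minimal standalone sketch of the conversion, not the library's implementation:

```python
def linear_to_ssa(path, N):
    # ids[p] maps a current linear position p to its SSA id
    ids = list(range(N))
    ssa_path = []
    next_id = N
    for pi, pj in path:
        ssa_path.append((ids[pi], ids[pj]))
        for p in sorted((pi, pj), reverse=True):
            ids.pop(p)  # contracted tensors leave the working list
        ids.append(next_id)  # the new intermediate gets a fresh SSA id
        next_id += 1
    return ssa_path

def ssa_to_linear(ssa_path, N):
    ids = list(range(N))
    linear = []
    next_id = N
    for i, j in ssa_path:
        pi, pj = ids.index(i), ids.index(j)
        linear.append(tuple(sorted((pi, pj))))
        for p in sorted((pi, pj), reverse=True):
            ids.pop(p)
        ids.append(next_id)
        next_id += 1
    return linear
```

For three tensors contracted pairwise, `linear_to_ssa([(0, 1), (0, 1)], 3)` yields `[(0, 1), (2, 3)]`: the second step references SSA ids 2 (the last input) and 3 (the first intermediate).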

FILE: cotengra/pathfinders/path_compressed.py
  class MiniTree (line 12) | class MiniTree:
    method __init__ (line 19) | def __init__(self):
    method copy (line 26) | def copy(self):
    method add (line 34) | def add(self, p, l, r):
    method contract (line 50) | def contract(self, p):
    method __repr__ (line 66) | def __repr__(self):
  class EmptyMiniTree (line 77) | class EmptyMiniTree:
    method __init__ (line 80) | def __init__(self, hgi, hgf):
    method copy (line 118) | def copy(self):
    method contract (line 123) | def contract(self, p):
  class Node (line 140) | class Node:
    method __init__ (line 145) | def __init__(self, hg, plr, chi, tracker, compress_late=False):
    method first (line 153) | def first(cls, inputs, output, size_dict, minimize):
    method next (line 179) | def next(self, p, l, r):
    method graph_key (line 213) | def graph_key(self):
    method __repr__ (line 216) | def __repr__(self):
  function ssa_path_to_bit_path (line 220) | def ssa_path_to_bit_path(path):
  function bit_path_to_ssa_path (line 234) | def bit_path_to_ssa_path(bitpath):
  class WindowedOptimizer (line 244) | class WindowedOptimizer:
    method __init__ (line 247) | def __init__(
    method tracker (line 264) | def tracker(self):
    method plot_size_footprint (line 269) | def plot_size_footprint(self, figsize=(8, 3)):
    method optimize_window (line 291) | def optimize_window(
    method refine (line 377) | def refine(
    method simulated_anneal (line 421) | def simulated_anneal(
    method get_ssa_path (line 531) | def get_ssa_path(self):

FILE: cotengra/pathfinders/path_compressed_greedy.py
  function _binary_combine (line 20) | def _binary_combine(func, x, y):
  class GreedyCompressed (line 33) | class GreedyCompressed:
    method __init__ (line 72) | def __init__(
    method _score (line 104) | def _score(self, i1, i2):
    method get_ssa_path (line 140) | def get_ssa_path(self, inputs, output, size_dict):
    method search (line 201) | def search(self, inputs, output, size_dict):
    method __call__ (line 209) | def __call__(self, inputs, output, size_dict, memory_limit=None):
  function greedy_compressed (line 215) | def greedy_compressed(inputs, output, size_dict, memory_limit=None, **kw...
  function trial_greedy_compressed (line 223) | def trial_greedy_compressed(inputs, output, size_dict, **kwargs):
  class GreedySpan (line 268) | class GreedySpan:
    method __init__ (line 296) | def __init__(
    method get_ssa_path (line 323) | def get_ssa_path(self, inputs, output, size_dict):
    method search (line 428) | def search(self, inputs, output, size_dict):
    method __call__ (line 436) | def __call__(self, inputs, output, size_dict, memory_limit=None):
  function greedy_span (line 442) | def greedy_span(inputs, output, size_dict, memory_limit=None, **kwargs):
  function trial_greedy_span (line 446) | def trial_greedy_span(inputs, output, size_dict, **kwargs):

FILE: cotengra/pathfinders/path_edgesort.py
  class EdgeSortOptimizer (line 5) | class EdgeSortOptimizer(PathOptimizer):
    method __init__ (line 16) | def __init__(self, reverse=False):
    method search (line 19) | def search(self, inputs, output, size_dict, **kwargs):
    method __call__ (line 32) | def __call__(self, inputs, output, size_dict, **kwargs):

FILE: cotengra/pathfinders/path_flowcutter.py
  class FlowCutterOptimizer (line 18) | class FlowCutterOptimizer(PathOptimizer):
    method __init__ (line 19) | def __init__(
    method run_flowcutter (line 26) | def run_flowcutter(self, file, max_time=None):
    method compute_edge_path (line 45) | def compute_edge_path(self, lg):
    method build_tree (line 50) | def build_tree(self, inputs, output, size_dict, memory_limit=None):
    method __call__ (line 76) | def __call__(self, inputs, output, size_dict, memory_limit=None):
  function optimize_flowcutter (line 80) | def optimize_flowcutter(
  function trial_flowcutter (line 87) | def trial_flowcutter(inputs, output, size_dict, max_time=10, seed=None):

FILE: cotengra/pathfinders/path_greedy.py
  function trial_greedy (line 12) | def trial_greedy(

FILE: cotengra/pathfinders/path_igraph.py
  function oe_to_igraph (line 19) | def oe_to_igraph(
  function igraph_subgraph_find_membership (line 46) | def igraph_subgraph_find_membership(
  function trial_igraph_dendrogram (line 100) | def trial_igraph_dendrogram(
  function trial_spinglass (line 166) | def trial_spinglass(

FILE: cotengra/pathfinders/path_kahypar.py
  function get_kahypar_profile_dir (line 13) | def get_kahypar_profile_dir():
  function to_sparse (line 33) | def to_sparse(hg, weight_nodes="const", weight_edges="log"):
  function kahypar_subgraph_find_membership (line 50) | def kahypar_subgraph_find_membership(

FILE: cotengra/pathfinders/path_labels.py
  function pop_fact (line 12) | def pop_fact(p, parts, n, pop_small_bias, pop_big_bias):
  function labels_partition (line 20) | def labels_partition(

FILE: cotengra/pathfinders/path_quickbb.py
  class QuickBBOptimizer (line 16) | class QuickBBOptimizer(PathOptimizer):
    method __init__ (line 17) | def __init__(self, max_time=10, executable="quickbb_64", seed=None):
    method run_quickbb (line 21) | def run_quickbb(self, fname, outfile, statfile, max_time=None):
    method build_tree (line 77) | def build_tree(self, inputs, output, size_dict):
    method __call__ (line 113) | def __call__(self, inputs, output, size_dict, memory_limit=None):
  function optimize_quickbb (line 117) | def optimize_quickbb(
  function trial_quickbb (line 124) | def trial_quickbb(inputs, output, size_dict, max_time=10, seed=None):

FILE: cotengra/pathfinders/path_random.py
  class RandomOptimizer (line 10) | class RandomOptimizer(PathOptimizer):
    method __init__ (line 21) | def __init__(self, seed=None):
    method __call__ (line 24) | def __call__(self, inputs, outputs, size_dict):
    method search (line 36) | def search(self, inputs, outputs, size_dict):

FILE: cotengra/pathfinders/path_simulated_annealing.py
  function compute_contracted_info (line 19) | def compute_contracted_info(legsa, legsb, appearances, size_dict):
  function linspace_generator (line 71) | def linspace_generator(start, stop, num, log=False):
  function _describe_tree (line 110) | def _describe_tree(tree, info="concise"):
  function _score_tree (line 114) | def _score_tree(scorer, tree, target_size=None, coeff_size_penalty=1.0):
  function _slice_tree_basic (line 125) | def _slice_tree_basic(tree, current_target_size, rng, unslice=1):
  function _slice_tree_reslice (line 134) | def _slice_tree_reslice(tree, current_target_size, rng):
  function _slice_tree_drift (line 138) | def _slice_tree_drift(tree, current_target_size, rng):
  function simulated_anneal_tree (line 152) | def simulated_anneal_tree(
  function _do_anneal (line 380) | def _do_anneal(tree, *args, **kwargs):
  function parallel_temper_tree (line 384) | def parallel_temper_tree(

FILE: cotengra/pathfinders/treedecomp.py
  class TreeDecomposition (line 48) | class TreeDecomposition:
    method __init__ (line 53) | def __init__(self):
  class EliminationOrdering (line 64) | class EliminationOrdering:
    method __init__ (line 69) | def __init__(self):
  function _increment_eo (line 76) | def _increment_eo(td, eo):
  function td_to_eo (line 119) | def td_to_eo(td):
  function td_str_to_tree_decomposition (line 140) | def td_str_to_tree_decomposition(td_str):

FILE: cotengra/plot.py
  function show_and_close (line 10) | def show_and_close(fn):
  function use_neutral_style (line 46) | def use_neutral_style(fn):
  function plot_trials_alt (line 60) | def plot_trials_alt(self, y=None, width=800, height=300):
  function plot_scatter (line 118) | def plot_scatter(
  function plot_trials (line 251) | def plot_trials(
  function plot_scatter_alt (line 272) | def plot_scatter_alt(
  function logxextrapolate (line 310) | def logxextrapolate(p1, p2, x):
  function mapper (line 319) | def mapper(y, x, mn, mx):
  function mapper_cat (line 323) | def mapper_cat(c, x, lookup):
  function plot_parameters_parallel (line 328) | def plot_parameters_parallel(
  function tree_to_networkx (line 452) | def tree_to_networkx(tree):
  function hypergraph_compute_plot_info_G (line 482) | def hypergraph_compute_plot_info_G(
  function rotate (line 610) | def rotate(xy, theta):
  function span (line 624) | def span(xy):
  function massage_pos (line 629) | def massage_pos(pos, nangles=100, flatten=False):
  function layout_pygraphviz (line 648) | def layout_pygraphviz(
  function get_nice_pos (line 713) | def get_nice_pos(
  function plot_tree (line 807) | def plot_tree(
  function plot_tree_ring (line 1089) | def plot_tree_ring(tree, **kwargs):
  function plot_tree_tent (line 1096) | def plot_tree_tent(tree, **kwargs):
  function plot_tree_span (line 1105) | def plot_tree_span(tree, **kwargs):
  function tree_to_df (line 1115) | def tree_to_df(tree):
  function plot_contractions (line 1152) | def plot_contractions(
  function plot_contractions_alt (line 1259) | def plot_contractions_alt(
  function slicefinder_to_df (line 1297) | def slicefinder_to_df(slice_finder, relative_flops=False):
  function plot_slicings (line 1322) | def plot_slicings(
  function plot_slicings_alt (line 1366) | def plot_slicings_alt(
  function plot_hypergraph (line 1401) | def plot_hypergraph(
  function plot_tree_rubberband (line 1560) | def plot_tree_rubberband(
  function plot_tree_flat (line 1643) | def plot_tree_flat(
  function plot_tree_circuit (line 1880) | def plot_tree_circuit(

FILE: cotengra/presets.py
  function estimate_optimal_hardness (line 26) | def estimate_optimal_hardness(inputs):
  class AutoOptimizer (line 44) | class AutoOptimizer(PathOptimizer):
    method __init__ (line 49) | def __init__(
    method _get_optimizer_hyper_threadsafe (line 76) | def _get_optimizer_hyper_threadsafe(self):
    method search (line 89) | def search(self, inputs, output, size_dict, **kwargs):
    method __call__ (line 115) | def __call__(self, inputs, output, size_dict, **kwargs):
  class AutoHQOptimizer (line 133) | class AutoHQOptimizer(AutoOptimizer):
    method __init__ (line 140) | def __init__(self, **kwargs):
  function get_auto_optimizer (line 154) | def get_auto_optimizer():
  function auto_optimize (line 159) | def auto_optimize(inputs, output, size_dict, **kwargs):
  function auto_optimize_tree (line 164) | def auto_optimize_tree(inputs, output, size_dict, **kwargs):
  function get_auto_hq_optimizer (line 171) | def get_auto_hq_optimizer():
  function auto_hq_optimize (line 176) | def auto_hq_optimize(inputs, output, size_dict, **kwargs):
  function auto_optimize_hq_tree (line 181) | def auto_optimize_hq_tree(inputs, output, size_dict, **kwargs):

FILE: cotengra/reusable.py
  function sortedtuple (line 10) | def sortedtuple(x):
  function make_hashable (line 14) | def make_hashable(x):
  function hash_contraction_a (line 25) | def hash_contraction_a(inputs, output, size_dict):
  function hash_contraction_b (line 41) | def hash_contraction_b(inputs, output, size_dict):
  function hash_contraction (line 58) | def hash_contraction(inputs, output, size_dict, method="a"):
  class ReusableOptimizer (line 68) | class ReusableOptimizer(PathOptimizer):
    method __init__ (line 105) | def __init__(
    method last_opt (line 142) | def last_opt(self):
    method _get_path_relevant_opts (line 145) | def _get_path_relevant_opts(self):
    method auto_hash_path_relevant_opts (line 151) | def auto_hash_path_relevant_opts(self):
    method hash_query (line 161) | def hash_query(self, inputs, output, size_dict):
    method minimize (line 175) | def minimize(self):
    method update_from_tree (line 181) | def update_from_tree(self, tree, overwrite="improved"):
    method _run_optimizer (line 219) | def _run_optimizer(self, inputs, output, size_dict):
    method _maybe_run_optimizer (line 231) | def _maybe_run_optimizer(self, inputs, output, size_dict):
    method __call__ (line 261) | def __call__(self, inputs, output, size_dict, memory_limit=None):
    method _get_suboptimizer (line 265) | def _get_suboptimizer(self):
    method _deconstruct_tree (line 268) | def _deconstruct_tree(self, opt, tree):
    method _run_optimizer (line 271) | def _run_optimizer(self, inputs, output, size_dict):
    method _reconstruct_tree (line 278) | def _reconstruct_tree(self, inputs, output, size_dict, con):
    method search (line 281) | def search(self, inputs, output, size_dict):
    method cleanup (line 290) | def cleanup(self):
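`reusable.py` caches results keyed by `hash_contraction(inputs, output, size_dict)`. One plausible scheme for such a hash — not necessarily the library's exact one — canonicalizes index names by order of first appearance, so equivalent contractions with different index labels map to the same key:

```python
import hashlib

def hash_contraction(inputs, output, size_dict):
    # Relabel indices by first appearance so that, e.g.,
    # [('a','b'), ('b',)] and [('x','y'), ('y',)] hash identically.
    # (Illustrative canonicalization, not the library's exact algorithm.)
    relabel = {}

    def canon(term):
        return tuple(relabel.setdefault(ix, str(len(relabel))) for ix in term)

    key = (
        tuple(map(canon, inputs)),
        canon(output),
        tuple(size_dict[ix] for ix in relabel),  # sizes in canonical order
    )
    return hashlib.sha1(repr(key).encode()).hexdigest()
```

Canonical hashing lets a reusable optimizer serve a cached path for any relabeled copy of a contraction it has already solved, at the cost of a linear pass over the index terms.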

FILE: cotengra/schematic.py
  class Drawing (line 11) | class Drawing:
    method __init__ (line 47) | def __init__(
    method _adjust_lims (line 106) | def _adjust_lims(self, x, y):
    method text (line 139) | def text(self, coo, text, preset=None, **kwargs):
    method text_between (line 173) | def text_between(self, cooa, coob, text, preset=None, **kwargs):
    method label_ax (line 227) | def label_ax(self, x, y, text, preset=None, **kwargs):
    method label_fig (line 250) | def label_fig(self, x, y, text, preset=None, **kwargs):
    method _parse_style_for_marker (line 273) | def _parse_style_for_marker(self, coo, preset=None, **kwargs):
    method _adjust_lims_for_marker (line 295) | def _adjust_lims_for_marker(self, x, y, r):
    method circle (line 304) | def circle(self, coo, preset=None, **kwargs):
    method wedge (line 324) | def wedge(self, coo, theta1, theta2, preset=None, **kwargs):
    method dot (line 357) | def dot(self, coo, preset=None, **kwargs):
    method regular_polygon (line 376) | def regular_polygon(self, coo, preset=None, **kwargs):
    method marker (line 406) | def marker(self, coo, preset=None, **kwargs):
    method square (line 448) | def square(self, coo, preset=None, **kwargs):
    method cube (line 451) | def cube(self, coo, preset=None, **kwargs):
    method line (line 483) | def line(self, cooa, coob, preset=None, **kwargs):
    method line_offset (line 577) | def line_offset(
    method arrowhead (line 674) | def arrowhead(self, cooa, coob, preset=None, **kwargs):
    method curve (line 749) | def curve(self, coos, preset=None, **kwargs):
    method shape (line 846) | def shape(self, coos, preset=None, **kwargs):
    method rectangle (line 897) | def rectangle(self, cooa, coob, preset=None, **kwargs):
    method patch (line 915) | def patch(self, coos, preset=None, **kwargs):
    method patch_around (line 987) | def patch_around(self, coos, *, preset=None, **kwargs):
    method patch_around_circles (line 1047) | def patch_around_circles(
    method savefig (line 1134) | def savefig(self, fname, dpi=300, bbox_inches="tight"):
  function parse_style_preset (line 1138) | def parse_style_preset(presets, preset, **kwargs):
  function simple_scale (line 1169) | def simple_scale(i, j, xscale=1, yscale=1):
  function axonometric_project (line 1173) | def axonometric_project(
  function coo_to_zorder (line 1214) | def coo_to_zorder(i, j, k, xscale=1, yscale=1, zscale=1):
  function get_color (line 1240) | def get_color(
  function mod_sat (line 1300) | def mod_sat(c, mod=None, alpha=None):
  function auto_colors (line 1316) | def auto_colors(nc, alpha=None, default_sequence=False):
  function darken_color (line 1373) | def darken_color(color, factor=2 / 3):
  function average_color (line 1379) | def average_color(colors):
  function jitter_color (line 1397) | def jitter_color(color, factor=0.05):
  function set_coloring_seed (line 1415) | def set_coloring_seed(seed):
  function hash_to_nvalues (line 1427) | def hash_to_nvalues(s, nval, seed=None):
  function hash_to_color (line 1449) | def hash_to_color(
  function mean (line 1493) | def mean(xs):
  function distance (line 1503) | def distance(pa, pb):
  function get_angle (line 1511) | def get_angle(pa, pb):
  function get_rotator_and_inverse (line 1517) | def get_rotator_and_inverse(pa, pb):
  function get_control_points (line 1542) | def get_control_points(pa, pb, pc, spacing=1 / 3):
  function gen_points_around (line 1585) | def gen_points_around(coo, radius=1, resolution=12):

FILE: cotengra/scoring.py
  class Objective (line 11) | class Objective:
    method __call__ (line 16) | def __call__(self, trial):
    method __repr__ (line 23) | def __repr__(self):
    method __hash__ (line 31) | def __hash__(self):
  function ensure_basic_quantities_are_computed (line 38) | def ensure_basic_quantities_are_computed(trial):
  class ExactObjective (line 50) | class ExactObjective(Objective):
    method cost_local_tree_node (line 53) | def cost_local_tree_node(self, tree, node):
    method score_local (line 59) | def score_local(self, **kwargs):
    method score_slice_index (line 65) | def score_slice_index(self, costs, ix):
    method get_dynamic_programming_minimize (line 71) | def get_dynamic_programming_minimize(self):
  class FlopsObjective (line 78) | class FlopsObjective(ExactObjective):
    method __init__ (line 90) | def __init__(self, secondary_weight=1e-3):
    method cost_local_tree_node (line 94) | def cost_local_tree_node(self, tree, node):
    method score_local (line 97) | def score_local(self, **kwargs):
    method score_slice_index (line 106) | def score_slice_index(self, costs, ix):
    method get_dynamic_programming_minimize (line 113) | def get_dynamic_programming_minimize(self):
    method __call__ (line 116) | def __call__(self, trial):
  class WriteObjective (line 125) | class WriteObjective(ExactObjective):
    method __init__ (line 139) | def __init__(self, secondary_weight=1e-3):
    method cost_local_tree_node (line 143) | def cost_local_tree_node(self, tree, node):
    method score_local (line 146) | def score_local(self, **kwargs):
    method score_slice_index (line 155) | def score_slice_index(self, costs, ix):
    method get_dynamic_programming_minimize (line 162) | def get_dynamic_programming_minimize(self):
    method __call__ (line 165) | def __call__(self, trial):
  class SizeObjective (line 174) | class SizeObjective(ExactObjective):
    method __init__ (line 186) | def __init__(self, secondary_weight=1e-3):
    method cost_local_tree_node (line 190) | def cost_local_tree_node(self, tree, node):
    method score_local (line 193) | def score_local(self, **kwargs):
    method score_slice_index (line 202) | def score_slice_index(self, costs, ix):
    method get_dynamic_programming_minimize (line 209) | def get_dynamic_programming_minimize(self):
    method __call__ (line 212) | def __call__(self, trial):
  class ComboObjective (line 221) | class ComboObjective(ExactObjective):
    method __init__ (line 240) | def __init__(
    method cost_local_tree_node (line 247) | def cost_local_tree_node(self, tree, node):
    method score_local (line 250) | def score_local(self, **kwargs):
    method score_slice_index (line 269) | def score_slice_index(self, costs, ix):
    method get_dynamic_programming_minimize (line 276) | def get_dynamic_programming_minimize(self):
    method __call__ (line 279) | def __call__(self, trial):
  class LimitObjective (line 284) | class LimitObjective(ExactObjective):
    method __init__ (line 304) | def __init__(self, factor=DEFAULT_COMBO_FACTOR):
    method cost_local_tree_node (line 308) | def cost_local_tree_node(self, tree, node):
    method score_local (line 311) | def score_local(self, **kwargs):
    method score_slice_index (line 321) | def score_slice_index(self, costs, ix):
    method get_dynamic_programming_minimize (line 328) | def get_dynamic_programming_minimize(self):
    method __call__ (line 331) | def __call__(self, trial):
  class CompressedStatsTracker (line 339) | class CompressedStatsTracker:
    method __init__ (line 353) | def __init__(self, hg, chi):
    method copy (line 378) | def copy(self):
    method update_pre_step (line 384) | def update_pre_step(self):
    method update_pre_compress (line 388) | def update_pre_compress(self, hg, *nodes):
    method update_post_compress (line 394) | def update_post_compress(self, hg, *nodes):
    method update_pre_contract (line 399) | def update_pre_contract(self, hg, i, j):
    method update_post_contract (line 405) | def update_post_contract(self, hg, ij):
    method update_post_step (line 413) | def update_post_step(self):
    method update_score (line 420) | def update_score(self, other):
    method combo_score (line 432) | def combo_score(self):
    method score (line 436) | def score(self):
    method describe (line 439) | def describe(self, join=" "):
    method __repr__ (line 454) | def __repr__(self):
  class CompressedStatsTrackerSize (line 458) | class CompressedStatsTrackerSize(CompressedStatsTracker):
    method __init__ (line 461) | def __init__(self, hg, chi, secondary_weight=1e-3):
    method score (line 466) | def score(self):
  class CompressedStatsTrackerPeak (line 473) | class CompressedStatsTrackerPeak(CompressedStatsTracker):
    method __init__ (line 476) | def __init__(self, hg, chi, secondary_weight=1e-3):
    method score (line 481) | def score(self):
  class CompressedStatsTrackerWrite (line 488) | class CompressedStatsTrackerWrite(CompressedStatsTracker):
    method __init__ (line 491) | def __init__(self, hg, chi, secondary_weight=1e-3):
    method score (line 496) | def score(self):
  class CompressedStatsTrackerFlops (line 503) | class CompressedStatsTrackerFlops(CompressedStatsTracker):
    method __init__ (line 506) | def __init__(self, hg, chi, secondary_weight=1e-3):
    method score (line 511) | def score(self):
  class CompressedStatsTrackerCombo (line 518) | class CompressedStatsTrackerCombo(CompressedStatsTracker):
    method __init__ (line 521) | def __init__(self, hg, chi, factor=DEFAULT_COMBO_FACTOR):
    method score (line 526) | def score(self):
  class CompressedObjective (line 530) | class CompressedObjective(Objective):
    method __init__ (line 533) | def __init__(self, chi="auto", compress_late=False):
    method get_compressed_stats_tracker (line 538) | def get_compressed_stats_tracker(self, hg):
    method compute_compressed_stats (line 553) | def compute_compressed_stats(self, trial):
  class CompressedSizeObjective (line 566) | class CompressedSizeObjective(CompressedObjective):
    method __init__ (line 586) | def __init__(
    method get_compressed_stats_tracker (line 595) | def get_compressed_stats_tracker(self, hg):
    method __call__ (line 600) | def __call__(self, trial):
  class CompressedPeakObjective (line 615) | class CompressedPeakObjective(CompressedObjective):
    method __init__ (line 636) | def __init__(
    method get_compressed_stats_tracker (line 645) | def get_compressed_stats_tracker(self, hg):
    method __call__ (line 650) | def __call__(self, trial):
  class CompressedWriteObjective (line 665) | class CompressedWriteObjective(CompressedObjective):
    method __init__ (line 686) | def __init__(
    method get_compressed_stats_tracker (line 695) | def get_compressed_stats_tracker(self, hg):
    method __call__ (line 700) | def __call__(self, trial):
  class CompressedFlopsObjective (line 715) | class CompressedFlopsObjective(CompressedObjective):
    method __init__ (line 736) | def __init__(
    method get_compressed_stats_tracker (line 745) | def get_compressed_stats_tracker(self, hg):
    method __call__ (line 750) | def __call__(self, trial):
  class CompressedComboObjective (line 765) | class CompressedComboObjective(CompressedObjective):
    method __init__ (line 768) | def __init__(
    method get_compressed_stats_tracker (line 777) | def get_compressed_stats_tracker(self, hg):
    method __call__ (line 782) | def __call__(self, trial):
  function parse_minimize (line 817) | def parse_minimize(minimize):
  function _get_score_fn_str_cached (line 827) | def _get_score_fn_str_cached(minimize):
  function get_score_fn (line 880) | def get_score_fn(minimize):
  class MultiObjective (line 892) | class MultiObjective(Objective):
    method __init__ (line 895) | def __init__(self, num_configs):
    method compute_mult (line 898) | def compute_mult(self, dims):
    method estimate_node_mult (line 901) | def estimate_node_mult(self, tree, node):
    method estimate_node_cache_mult (line 906) | def estimate_node_cache_mult(self, tree, node, sliced_ind_ordering):
  class MultiObjectiveDense (line 920) | class MultiObjectiveDense(MultiObjective):
    method compute_mult (line 927) | def compute_mult(self, dims):
  function expected_coupons (line 931) | def expected_coupons(num_sub, num_total):
  class MultiObjectiveUniform (line 938) | class MultiObjectiveUniform(MultiObjective):
    method compute_mult (line 945) | def compute_mult(self, dims):
  class MultiObjectiveLinear (line 949) | class MultiObjectiveLinear(MultiObjective):
    method __init__ (line 957) | def __init__(self, num_configs, coeff=1):
    method compute_mult (line 961) | def compute_mult(self, dims):

FILE: cotengra/slicer.py
  class ContractionCosts (line 17) | class ContractionCosts:
    method __init__ (line 43) | def __init__(
    method _set_state_from (line 77) | def _set_state_from(self, other):
    method copy (line 89) | def copy(self):
    method from_contraction_tree (line 96) | def from_contraction_tree(cls, contraction_tree, **kwargs):
    method from_info (line 115) | def from_info(cls, info, **kwargs):
    method size (line 121) | def size(self):
    method flops (line 125) | def flops(self):
    method total_flops (line 129) | def total_flops(self):
    method overhead (line 133) | def overhead(self):
    method remove (line 136) | def remove(self, ix, inplace=False):
    method __repr__ (line 194) | def __repr__(self):
  class SliceFinder (line 204) | class SliceFinder:
    method __init__ (line 232) | def __init__(
    method _maybe_default (line 283) | def _maybe_default(self, attr, value):
    method best (line 288) | def best(
    method trial (line 333) | def trial(
    method search (line 409) | def search(
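The `SliceFinder` listing above implements index slicing: summing over a chosen index splits one contraction into many smaller, independent contractions whose results are accumulated, trading extra flops for lower peak memory. A minimal pure-Python sketch of the idea (illustrative names only, not cotengra's API):

```python
def matmul(A, B):
    """Plain 'ab,bc->ac' contraction over nested lists."""
    n, k, m = len(A), len(B), len(B[0])
    return [
        [sum(A[i][b] * B[b][j] for b in range(k)) for j in range(m)]
        for i in range(n)
    ]


def matmul_sliced(A, B):
    """The same contraction, sliced over the shared index b: each slice
    is an independent rank-1 sub-contraction, and the slices sum to the
    full result."""
    n, k, m = len(A), len(B), len(B[0])
    out = [[0] * m for _ in range(n)]
    for b in range(k):  # one independent sub-contraction per value of b
        for i in range(n):
            for j in range(m):
                out[i][j] += A[i][b] * B[b][j]
    return out


A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
print(matmul(A, B))         # [[19, 22], [43, 50]]
print(matmul_sliced(A, B))  # identical, computed slice by slice
```

In the real slicer the sliced contractions can be dispatched to separate workers or devices, which is why `ContractionCosts.remove` tracks how removing an index changes size and flops.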

FILE: cotengra/utils.py
  function getter (line 20) | def getter(index):
  function groupby (line 33) | def groupby(key, seq):
  function interleave (line 45) | def interleave(seqs):
  function unique (line 57) | def unique(it):
  function deprecated (line 61) | def deprecated(fn, old_name, new_name):
  function prod (line 74) | def prod(it):
  class oset (line 82) | class oset:
    method __init__ (line 90) | def __init__(self, it=()):
    method _from_dict (line 94) | def _from_dict(cls, d):
    method from_dict (line 100) | def from_dict(cls, d):
    method copy (line 104) | def copy(self):
    method add (line 107) | def add(self, k):
    method discard (line 110) | def discard(self, k):
    method remove (line 113) | def remove(self, k):
    method clear (line 116) | def clear(self):
    method update (line 119) | def update(self, *others):
    method union (line 123) | def union(self, *others):
    method intersection_update (line 128) | def intersection_update(self, *others):
    method intersection (line 135) | def intersection(self, *others):
    method difference_update (line 145) | def difference_update(self, *others):
    method difference (line 152) | def difference(self, *others):
    method symmetric_difference (line 159) | def symmetric_difference(self, other):
    method __eq__ (line 168) | def __eq__(self, other):
    method __or__ (line 173) | def __or__(self, other):
    method __ior__ (line 176) | def __ior__(self, other):
    method __and__ (line 180) | def __and__(self, other):
    method __iand__ (line 183) | def __iand__(self, other):
    method __sub__ (line 187) | def __sub__(self, other):
    method __isub__ (line 190) | def __isub__(self, other):
    method __len__ (line 194) | def __len__(self):
    method __iter__ (line 197) | def __iter__(self):
    method __contains__ (line 200) | def __contains__(self, x):
    method __repr__ (line 203) | def __repr__(self):
  class MaxCounter (line 207) | class MaxCounter:
    method __init__ (line 239) | def __init__(self, it=None):
    method copy (line 246) | def copy(self):
    method discard (line 252) | def discard(self, x):
    method add (line 267) | def add(self, x):
    method max (line 272) | def max(self):
  class BitSet (line 277) | class BitSet:
    method __init__ (line 280) | def __init__(self, it):
    method asint (line 288) | def asint(self, elem):
    method fromint (line 291) | def fromint(self, n):
    method frommembers (line 294) | def frommembers(self, it=()):
  class BitMembers (line 300) | class BitMembers:
    method fromint (line 304) | def fromint(cls, bitset, n):
    method frommembers (line 311) | def frommembers(cls, bitset, it=()):
    method __int__ (line 317) | def __int__(self):
    method __eq__ (line 322) | def __eq__(self, other):
    method __len__ (line 327) | def __len__(self):
    method __iter__ (line 330) | def __iter__(self):
    method add (line 337) | def add(self, elem):
    method clear (line 340) | def clear(self):
    method copy (line 343) | def copy(self):
    method __bool__ (line 346) | def __bool__(self):
    method __contains__ (line 349) | def __contains__(self, elem):
    method discard (line 352) | def discard(self, elem):
    method remove (line 355) | def remove(self, elem):
    method difference_update (line 360) | def difference_update(self, *others):
    method difference (line 366) | def difference(self, *others):
    method intersection_update (line 373) | def intersection_update(self, *others):
    method intersection (line 379) | def intersection(self, *others):
    method isdisjoint (line 386) | def isdisjoint(self, other):
    method issubset (line 389) | def issubset(self, other):
    method issuperset (line 392) | def issuperset(self, other):
    method symmetric_difference_update (line 395) | def symmetric_difference_update(self, other):
    method symmetric_difference (line 400) | def symmetric_difference(self, other):
    method update (line 405) | def update(self, *others):
    method union (line 410) | def union(self, *others):
    method __repr__ (line 415) | def __repr__(self):
  class DiskDict (line 419) | class DiskDict:
    method __init__ (line 446) | def __init__(self, directory=None, max_retries=3, retry_delay=0.01):
    method clear (line 457) | def clear(self):
    method cleanup (line 466) | def cleanup(self, delete_dir=False):
    method __contains__ (line 476) | def __contains__(self, k):
    method __setitem__ (line 488) | def __setitem__(self, k, v):
    method __delitem__ (line 502) | def __delitem__(self, k):
    method __getitem__ (line 521) | def __getitem__(self, k):
    method get (line 553) | def get(self, k, default=None):
    method keys (line 559) | def keys(self):
    method values (line 574) | def values(self):
    method items (line 578) | def items(self):
  function get_rng (line 583) | def get_rng(seed=None):
  class GumbelBatchedGenerator (line 605) | class GumbelBatchedGenerator:
    method __init__ (line 608) | def __init__(self, seed=None):
    method __call__ (line 611) | def __call__(self):
  class BadTrial (line 615) | class BadTrial(Exception):
  function compute_size_by_dict (line 624) | def compute_size_by_dict(indices, size_dict):
  function get_symbol (line 657) | def get_symbol(i):
  function get_symbol_map (line 689) | def get_symbol_map(inputs):
  function rand_equation (line 711) | def rand_equation(
  function tree_equation (line 791) | def tree_equation(
  function networkx_graph_to_equation (line 828) | def networkx_graph_to_equation(
  function randreg_equation (line 872) | def randreg_equation(
  function perverse_equation (line 908) | def perverse_equation(
  function rand_tree (line 960) | def rand_tree(
  function lattice_equation (line 991) | def lattice_equation(dims, cyclic=False, d_min=2, d_max=None, seed=None):
  function find_output_str (line 1057) | def find_output_str(lhs):
  function eq_to_inputs_output (line 1083) | def eq_to_inputs_output(eq):
  function inputs_output_to_eq (line 1108) | def inputs_output_to_eq(inputs, output, canonicalize=False):
  function shapes_inputs_to_size_dict (line 1135) | def shapes_inputs_to_size_dict(shapes, inputs):
  function make_rand_size_dict_from_inputs (line 1159) | def make_rand_size_dict_from_inputs(inputs, d_min=2, d_max=3, seed=None):
  function make_shapes_from_inputs (line 1188) | def make_shapes_from_inputs(inputs, size_dict):
  function make_arrays_from_inputs (line 1206) | def make_arrays_from_inputs(inputs, size_dict, seed=None, dtype="float64"):
  function make_arrays_from_eq (line 1250) | def make_arrays_from_eq(
  function find_output_from_inputs (line 1289) | def find_output_from_inputs(inputs):
  function is_edge_path (line 1321) | def is_edge_path(optimize):
  function canonicalize_inputs (line 1330) | def canonicalize_inputs(
  function convert_from_interleaved (line 1415) | def convert_from_interleaved(args):
  function check_ellipsis (line 1433) | def check_ellipsis(term):
  function parse_equation_ellipses (line 1455) | def parse_equation_ellipses(eq, shapes, tuples=False):
  function parse_einsum_input (line 1517) | def parse_einsum_input(args, shapes=False, tuples=False, constants=None):
  function save_to_json (line 1565) | def save_to_json(inputs, output, size_dict, filename):
  function load_from_json (line 1591) | def load_from_json(filename):
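Two of the utilities above are small enough to sketch from their names: `compute_size_by_dict` is the product of index dimensions, and `get_symbol` maps an integer to a distinct einsum index character. The versions below are hedged reimplementations for illustration (cotengra's actual symbol mapping beyond the ascii letters may differ):

```python
from functools import reduce
from operator import mul


def compute_size_by_dict(indices, size_dict):
    """Total number of elements of a tensor with the given indices."""
    return reduce(mul, (size_dict[ix] for ix in indices), 1)


def get_symbol(i):
    """Map a non-negative integer to a distinct index symbol:
    ascii letters first, then a unicode fallback (assumed scheme)."""
    symbols = "abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ"
    if i < len(symbols):
        return symbols[i]
    # past the 52 ascii letters, fall back to successive unicode chars
    return chr(192 + i - len(symbols))


print(compute_size_by_dict("abc", {"a": 2, "b": 3, "c": 4}))  # 24
print(get_symbol(0), get_symbol(26))  # a A
```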

FILE: docs/_pygments/_pygments_dark.py
  class MarianaDark (line 39) | class MarianaDark(Style):

FILE: docs/_pygments/_pygments_light.py
  class MarianaLight (line 39) | class MarianaLight(Style):

FILE: docs/conf.py
  function linkcode_resolve (line 96) | def linkcode_resolve(domain, info):

FILE: tests/test_backends.py
  function _has (line 22) | def _has(name):
  function _setup_backend (line 64) | def _setup_backend(backend):
  function _to_backend (line 72) | def _to_backend(x, backend):
  function test_contract_backend (line 102) | def test_contract_backend(backend, dtype, strip_exponent, slicing):

FILE: tests/test_compressed.py
  function test_compressed_greedy (line 6) | def test_compressed_greedy():
  function test_compressed_span (line 24) | def test_compressed_span():
  function test_compressed_agglom (line 42) | def test_compressed_agglom():
  function test_compressed_reconfigure (line 64) | def test_compressed_reconfigure(order_only):
  function test_compressed_windowed_reconfigure (line 78) | def test_compressed_windowed_reconfigure():

FILE: tests/test_compute.py
  function test_basic_equations (line 107) | def test_basic_equations(eq, dtype, strip_exponent):
  function test_contraction_tree_rand_equation (line 127) | def test_contraction_tree_rand_equation(
  function test_lazy_sliced_output_reduce (line 188) | def test_lazy_sliced_output_reduce():
  function test_exponent_stripping (line 221) | def test_exponent_stripping(autojit, dtype):
  function test_einsum_expression (line 255) | def test_einsum_expression(

FILE: tests/test_hypergraph.py
  function test_shortest_distances (line 6) | def test_shortest_distances():
  function test_resistance_centrality (line 16) | def test_resistance_centrality():
  function test_simple_centrality (line 25) | def test_simple_centrality():
  function test_compute_loops (line 34) | def test_compute_loops():
  function test_plot_nonconsec (line 47) | def test_plot_nonconsec():

FILE: tests/test_interface.py
  function test_array_contract_path_cache (line 8) | def test_array_contract_path_cache(optimize_type):
  function test_array_contract_expression_cache (line 37) | def test_array_contract_expression_cache(optimize_type, strip_exponent):
  function test_einsum_formats_interleaved (line 85) | def test_einsum_formats_interleaved():
  function test_einsum_ellipses (line 110) | def test_einsum_ellipses(eq, shapes):
  function test_slice_and_strip_exponent (line 117) | def test_slice_and_strip_exponent():
  function test_null_tree (line 166) | def test_null_tree():

FILE: tests/test_optimizers.py
  function rand_reg_contract (line 26) | def rand_reg_contract(n, deg, seed=None):
  function contraction_20_5 (line 50) | def contraction_20_5():
  function test_basic (line 87) | def test_basic(contraction_20_5, method, requires):
  function test_single_term_uniform (line 106) | def test_single_term_uniform(inputs, output, size_dict, method, requires):
  function test_single_term_direct (line 125) | def test_single_term_direct(inputs, output, size_dict, method, requires):
  function test_hyper (line 152) | def test_hyper(contraction_20_5, optlib, requires, parallel):
  function test_hyper_sbplx_restart_patience_triggers_local_restart (line 170) | def test_hyper_sbplx_restart_patience_triggers_local_restart():
  function test_hyper_sbplx_partition_uses_goodness_heuristic (line 201) | def test_hyper_sbplx_partition_uses_goodness_heuristic():
  function test_hyper_sbplx_partition_greedy_equal_chunks (line 218) | def test_hyper_sbplx_partition_greedy_equal_chunks():
  function test_hyper_sbplx_cycle_step_scaling_clamped_by_omega (line 238) | def test_hyper_sbplx_cycle_step_scaling_clamped_by_omega():
  function test_hyper_sbplx_cycle_convergence_is_relative_to_scale (line 257) | def test_hyper_sbplx_cycle_convergence_is_relative_to_scale():
  function test_hyper_sbplx_repeated_restarts_escalate_to_global_restart (line 272) | def test_hyper_sbplx_repeated_restarts_escalate_to_global_restart():
  function test_hyper_sbplx_improvement_resets_restart_counters (line 303) | def test_hyper_sbplx_improvement_resets_restart_counters():
  function test_hyper_sbplx_stale_nm_results_ignored_after_restart (line 324) | def test_hyper_sbplx_stale_nm_results_ignored_after_restart():
  function test_nmcore_inject_vertex_diameter_gate_accepts_nearby (line 346) | def test_nmcore_inject_vertex_diameter_gate_accepts_nearby():
  function test_nmcore_inject_vertex_diameter_gate_rejects_far (line 365) | def test_nmcore_inject_vertex_diameter_gate_rejects_far():
  function test_nmcore_inject_vertex_early_convergence_signal (line 383) | def test_nmcore_inject_vertex_early_convergence_signal():
  function test_nmcore_inject_vertex_inf_diameter_fraction (line 403) | def test_nmcore_inject_vertex_inf_diameter_fraction():
  function test_nm_sampler_adaptive_filler_scale (line 420) | def test_nm_sampler_adaptive_filler_scale():
  function test_sbplx_sampler_adaptive_filler_scale (line 457) | def test_sbplx_sampler_adaptive_filler_scale():
  function test_nmcore_psi_convergence_uses_relative_diameter (line 479) | def test_nmcore_psi_convergence_uses_relative_diameter():
  function test_nm_sampler_exits_init_phase_with_inf_scores (line 496) | def test_nm_sampler_exits_init_phase_with_inf_scores():
  function test_cmaes_report_result_handles_inf (line 523) | def test_cmaes_report_result_handles_inf():
  function test_optuna_report_result_handles_inf (line 542) | def test_optuna_report_result_handles_inf():
  function test_binaries (line 581) | def test_binaries(contraction_20_5, optimize):
  function test_hyper_slicer (line 590) | def test_hyper_slicer(parallel):
  function test_hyper_reconf (line 613) | def test_hyper_reconf(parallel):
  function test_hyper_slicer_reconf (line 633) | def test_hyper_slicer_reconf(parallel):
  function test_insane_nested (line 661) | def test_insane_nested(parallel_backend):
  function test_plotting (line 694) | def test_plotting():
  function test_auto_optimizers_threadsafe (line 712) | def test_auto_optimizers_threadsafe():
  function test_reusable_optimizes_overwrite_improved (line 731) | def test_reusable_optimizes_overwrite_improved():

FILE: tests/test_parallel.py
  function _reset_parallel_state (line 18) | def _reset_parallel_state():
  function _check_worker_flag (line 27) | def _check_worker_flag():
  function _worker_auto_returns_none (line 32) | def _worker_auto_returns_none():
  function _subprocess_auto_returns_none (line 37) | def _subprocess_auto_returns_none(q):
  function test_auto_creates_pool (line 42) | def test_auto_creates_pool():
  function test_default_backend_preference (line 52) | def test_default_backend_preference():
  function test_explicit_process_backend_reuses_auto (line 60) | def test_explicit_process_backend_reuses_auto():
  function test_explicit_loky_reuses_auto (line 73) | def test_explicit_loky_reuses_auto():
  function test_threads_remain_explicit_only_for_auto (line 89) | def test_threads_remain_explicit_only_for_auto():
  function test_threads_do_not_clobber_remembered_auto_backend (line 103) | def test_threads_do_not_clobber_remembered_auto_backend():
  function test_worker_flag_prevents_auto_pool (line 116) | def test_worker_flag_prevents_auto_pool():
  function test_submit_sets_worker_flag_for_process_pools (line 127) | def test_submit_sets_worker_flag_for_process_pools():
  function test_submit_does_not_mark_thread_workers (line 138) | def test_submit_does_not_mark_thread_workers():
  function test_spawn_like_workers_disable_auto (line 153) | def test_spawn_like_workers_disable_auto(start_method):
  function test_subprocess_no_auto_pool_fork (line 167) | def test_subprocess_no_auto_pool_fork():
  function test_random_greedy_parallel_process_backend (line 188) | def test_random_greedy_parallel_process_backend(monkeypatch):
  function test_pool_persists_across_calls (line 211) | def test_pool_persists_across_calls():
  function test_explicit_parallel_true_sets_pid (line 222) | def test_explicit_parallel_true_sets_pid():
  function test_pid_mismatch_returns_none (line 233) | def test_pid_mismatch_returns_none():

FILE: tests/test_paths_basic.py
  function test_manual_cases (line 99) | def test_manual_cases(eq, which):
  function test_basic_rand (line 115) | def test_basic_rand(seed, which):
  function test_random_greedy_track_flops (line 143) | def test_random_greedy_track_flops(seed):
  function test_basic_perverse (line 177) | def test_basic_perverse(seed, which):
  function test_optimal_lattice_eq (line 194) | def test_optimal_lattice_eq():
  function test_random_optimize (line 208) | def test_random_optimize():
  function test_edgesort_optimize (line 221) | def test_edgesort_optimize():
  function test_edgesort_optimize_manual_labelled_reverse (line 234) | def test_edgesort_optimize_manual_labelled_reverse():

FILE: tests/test_slicer.py
  function test_slicer (line 6) | def test_slicer():
  function test_plot (line 16) | def test_plot():
  function test_plot_alt (line 28) | def test_plot_alt():

FILE: tests/test_tree.py
  function test_contraction_tree_equivalency (line 7) | def test_contraction_tree_equivalency(nodeops):
  function test_contraction_tree_from_path_incomplete (line 47) | def test_contraction_tree_from_path_incomplete(ssa, autocomplete, nodeops):
  function test_tree_incomplete (line 90) | def test_tree_incomplete(nodeops):
  function test_reconfigure (line 116) | def test_reconfigure(select, minimize):
  function test_reconfigure_forested (line 171) | def test_reconfigure_forested(parallel, requires):
  function test_reconfigure_with_n_smaller_than_subtree_size (line 198) | def test_reconfigure_with_n_smaller_than_subtree_size():
  function test_slice_and_reconfigure (line 214) | def test_slice_and_reconfigure(forested, parallel, requires):
  function test_plot (line 238) | def test_plot():
  function test_plot_alt (line 258) | def test_plot_alt():
  function test_compressed_rank (line 271) | def test_compressed_rank(optimize):
  function test_print_contractions (line 283) | def test_print_contractions(seed):
  function test_remove_ind (line 287) | def test_remove_ind():
  function test_restore_ind (line 339) | def test_restore_ind(ind):
  function test_unslice_rand (line 353) | def test_unslice_rand():
  function test_unslice_all (line 367) | def test_unslice_all():
  function test_reslice_and_reconfigure (line 378) | def test_reslice_and_reconfigure():
  function test_tree_with_one_node (line 389) | def test_tree_with_one_node():
  function test_slice_and_restore_preprocessed_inds (line 400) | def test_slice_and_restore_preprocessed_inds(seed, nodeops):
  function test_tree_from_edge_path (line 436) | def test_tree_from_edge_path(n, seed, nodeops):
  function test_tree_build_divide_labels (line 456) | def test_tree_build_divide_labels():
  function test_tree_build_agglom_labels (line 466) | def test_tree_build_agglom_labels():
  function test_tree_peak_size_reorder (line 477) | def test_tree_peak_size_reorder(seed):
  function test_contraction_tree_from_ssa_path_complete (line 491) | def test_contraction_tree_from_ssa_path_complete(nodeops):
  function test_ssa_subgraph_tracking (line 513) | def test_ssa_subgraph_tracking():
  function test_ssa_surface_order (line 537) | def test_ssa_surface_order():
  function test_simulated_anneal_tree (line 554) | def test_simulated_anneal_tree():
  function test_simulated_anneal_tree_with_slicing (line 570) | def test_simulated_anneal_tree_with_slicing():
  function test_tree_single_input_nosimp (line 590) | def test_tree_single_input_nosimp(path, slice):
  function test_tree_single_input_simp (line 612) | def test_tree_single_input_simp(path, slice):
  function test_tree_single_input_transpose (line 635) | def test_tree_single_input_transpose(path, slice):
Condensed preview — 118 files, each showing path, character count, and a content snippet (full structured content: 9,162K chars).
[
  {
    "path": ".codecov.yml",
    "chars": 216,
    "preview": "codecov:\n  require_ci_to_pass: yes\n\ncoverage:\n  range: 50..100\n  status:\n    project:\n      default:\n        information"
  },
  {
    "path": ".gitattributes",
    "chars": 618,
    "preview": "# Auto detect text files and perform LF normalization\n* text=auto\n\n# Standard to msysgit\n*.doc\t diff=astextplain\n*.DOC\t "
  },
  {
    "path": ".github/dependabot.yml",
    "chars": 619,
    "preview": "# To get started with Dependabot version updates, you'll need to specify which\n# package ecosystems to update and where "
  },
  {
    "path": ".github/workflows/pypi-release.yml",
    "chars": 2725,
    "preview": "name: Build and Upload cotengra to PyPI\non:\n  release:\n    types:\n      - published\n  push:\n    tags:\n      - 'v*'\n\njobs"
  },
  {
    "path": ".github/workflows/test.yml",
    "chars": 1357,
    "preview": "name: Tests\n\non:\n  workflow_dispatch:\n  push:\n  pull_request:\n\ndefaults:\n  run:\n    shell: bash -l {0}\n\njobs:\n  run-test"
  },
  {
    "path": ".gitignore",
    "chars": 879,
    "preview": "# Byte-compiled / optimized / DLL files\n__pycache__/\n*.py[cod]\n*$py.class\n\n# C extensions\n*.so\n\n# Distribution / packagi"
  },
  {
    "path": ".readthedocs.yml",
    "chars": 369,
    "preview": "version: 2\n\nsphinx:\n  configuration: docs/conf.py\n\nbuild:\n  os: \"ubuntu-24.04\"\n  tools:\n    python: \"latest\"\n  jobs:\n   "
  },
  {
    "path": "LICENSE.md",
    "chars": 11358,
    "preview": "\n                                 Apache License\n                           Version 2.0, January 2004\n                  "
  },
  {
    "path": "MANIFEST.in",
    "chars": 133,
    "preview": "include cotengra/kahypar_profiles/*.ini\ninclude cotengra/kahypar_profiles/old/*.ini\ninclude LICENSE.md\ninclude README.md"
  },
  {
    "path": "README.md",
    "chars": 2126,
    "preview": "<p align=\"left\"><img src=\"https://imgur.com/OM5XyaD.png\" alt=\"cotengra\" width=\"400px\"></p>\n\n[![tests](https://github.com"
  },
  {
    "path": "cotengra/__init__.py",
    "chars": 9156,
    "preview": "\"\"\"Hyper optimized contraction trees for large tensor networks and einsums.\"\"\"\n\nfrom importlib.metadata import PackageNo"
  },
  {
    "path": "cotengra/contract.py",
    "chars": 31192,
    "preview": "\"\"\"Functionality relating to actually contracting.\"\"\"\n\nimport contextlib\nimport functools\nimport itertools\nimport operat"
  },
  {
    "path": "cotengra/core.py",
    "chars": 157217,
    "preview": "\"\"\"Core contraction tree data structure and methods.\"\"\"\n\nimport collections\nimport functools\nimport itertools\nimport mat"
  },
  {
    "path": "cotengra/core_multi.py",
    "chars": 8292,
    "preview": "import math\n\nfrom .core import ContractionTree, cached_node_property\n\n\nclass ContractionTreeMulti(ContractionTree):\n    "
  },
  {
    "path": "cotengra/experimental/__init__.py",
    "chars": 63,
    "preview": "\"\"\"Potentially useful but experimental (untested) features,\"\"\"\n"
  },
  {
    "path": "cotengra/experimental/hyper_de.py",
    "chars": 10307,
    "preview": "\"\"\"Hyper optimization using a pure Python differential evolution strategy.\"\"\"\n\nfrom ..utils import get_rng\nfrom ._param_"
  },
  {
    "path": "cotengra/experimental/hyper_pe.py",
    "chars": 12845,
    "preview": "\"\"\"Hyper optimization using parallel evolution with ranked sigma assignment.\"\"\"\n\nimport math\n\nfrom ..utils import get_rn"
  },
  {
    "path": "cotengra/experimental/hyper_pymoo.py",
    "chars": 5832,
    "preview": "\"\"\"Hyper optimization using pymoo single-objective algorithms.\n\nThis backend currently supports serial optimization only"
  },
  {
    "path": "cotengra/experimental/hyper_scipy.py",
    "chars": 10335,
    "preview": "\"\"\"Hyper optimization using scipy gradient-free optimizers.\n\nSupported methods: ``differential_evolution``, ``dual_annea"
  },
  {
    "path": "cotengra/experimental/hyper_smac.py",
    "chars": 4137,
    "preview": "\"\"\"Hyper parameter optimization using SMAC3.\n\nhttps://automl.github.io/SMAC3/latest/\n\"\"\"\n\nfrom .hyper import HyperOptLib"
  },
  {
    "path": "cotengra/experimental/multi.ipynb",
    "chars": 123609,
    "preview": "{\n \"cells\": [\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": 1,\n   \"id\": \"9576af0f-adc3-44e4-88d0-7a57dc97a1d5\",\n   \""
  },
  {
    "path": "cotengra/experimental/path_compressed_branchbound.py",
    "chars": 15858,
    "preview": "\"\"\"Compressed contraction path finding using branch and bound.\"\"\"\n\nimport collections\nimport heapq\nimport itertools\nimpo"
  },
  {
    "path": "cotengra/experimental/path_compressed_mcts.py",
    "chars": 10505,
    "preview": "\"\"\"Compressed contraction tree search using monte carlo tree search.\"\"\"\n\nimport math\n\nfrom ..core import ContractionTree"
  },
  {
    "path": "cotengra/experimental/scoring.py",
    "chars": 3112,
    "preview": "class CompressedTracedObjective(Objective):\n    def __init__(self, chi, compress_late=False, r=1):\n        self.chi = ch"
  },
  {
    "path": "cotengra/hypergraph.py",
    "chars": 28136,
    "preview": "\"\"\"Simple hypergraph (and also linegraph) representations for simulating\ncontractions.\n\"\"\"\n\nimport collections\nimport it"
  },
  {
    "path": "cotengra/hyperoptimizers/__init__.py",
    "chars": 77,
    "preview": "\"\"\"Different hyper (or meta) optimizers for sampling paramtetrized trees.\"\"\"\n"
  },
  {
    "path": "cotengra/hyperoptimizers/_param_mapping.py",
    "chars": 7084,
    "preview": "\"\"\"Shared parameter mapping utilities for hyper-optimization backends.\n\nProvides classes for mapping heterogeneous param"
  },
  {
    "path": "cotengra/hyperoptimizers/hyper.py",
    "chars": 40489,
    "preview": "\"\"\"Base hyper optimization functionality.\"\"\"\n\nimport functools\nimport importlib\nimport re\nimport time\nimport warnings\nfr"
  },
  {
    "path": "cotengra/hyperoptimizers/hyper_cmaes.py",
    "chars": 3135,
    "preview": "\"\"\"Hyper parameter optimization using cmaes, as implemented by\n\nhttps://github.com/CyberAgentAILab/cmaes.\n\n\"\"\"\n\nfrom ._p"
  },
  {
    "path": "cotengra/hyperoptimizers/hyper_es.py",
    "chars": 17372,
    "preview": "\"\"\"Hyper optimization using a steady-state diagonal evolutionary strategy.\"\"\"\n\nimport bisect\nimport math\n\nfrom ..utils i"
  },
  {
    "path": "cotengra/hyperoptimizers/hyper_neldermead.py",
    "chars": 33414,
    "preview": "\"\"\"Hyper optimization using the Nelder-Mead simplex method.\"\"\"\n\nimport warnings\n\nfrom ..utils import get_rng\nfrom ._para"
  },
  {
    "path": "cotengra/hyperoptimizers/hyper_nevergrad.py",
    "chars": 4574,
    "preview": "\"\"\"Hyper optimization using nevergrad.\"\"\"\n\nfrom .hyper import HyperOptLib, register_hyper_optlib\n\n\ndef convert_param_to_"
  },
  {
    "path": "cotengra/hyperoptimizers/hyper_optuna.py",
    "chars": 2860,
    "preview": "\"\"\"Hyper optimization using optuna.\"\"\"\n\nimport warnings\n\nfrom .hyper import HyperOptLib, register_hyper_optlib\n\n\ndef mak"
  },
  {
    "path": "cotengra/hyperoptimizers/hyper_random.py",
    "chars": 8726,
    "preview": "\"\"\"Hyper optimization using random or Latin Hypercube sampling.\"\"\"\n\nimport functools\nimport math\n\nfrom ..utils import ge"
  },
  {
    "path": "cotengra/hyperoptimizers/hyper_sbplx.py",
    "chars": 27816,
    "preview": "\"\"\"Hyper optimization using the Sbplx method (Rowan, 1990).\"\"\"\n\nimport warnings\n\nfrom ..utils import get_rng\nfrom ._para"
  },
  {
    "path": "cotengra/hyperoptimizers/hyper_skopt.py",
    "chars": 4464,
    "preview": "\"\"\"Hyper optimization using scikit-optimize.\"\"\"\n\nfrom .hyper import HyperOptLib, register_hyper_optlib\n\n\ndef convert_par"
  },
  {
    "path": "cotengra/interface.py",
    "chars": 38498,
    "preview": "\"\"\"High-level interface functions to cotengra.\"\"\"\n\nimport functools\n\nimport autoray as ar\n\nfrom .core import Contraction"
  },
  {
    "path": "cotengra/nodeops.py",
    "chars": 9225,
    "preview": "from functools import reduce\nfrom operator import or_\n\n\nclass NodeOpsFrozenset:\n    \"\"\"Namespace for interacting with fr"
  },
  {
    "path": "cotengra/oe.py",
    "chars": 554,
    "preview": "\"\"\"`opt_einsum` interface.\"\"\"\n\ntry:\n    from opt_einsum.paths import PathOptimizer, get_path_fn, register_path_fn\n\n    o"
  },
  {
    "path": "cotengra/parallel.py",
    "chars": 19490,
    "preview": "\"\"\"Interface for parallelism.\n\nThis module centralizes cotengra's lightweight parallel execution logic. The\nmain user en"
  },
  {
    "path": "cotengra/pathfinders/__init__.py",
    "chars": 45,
    "preview": "\"\"\"Parametrized contraction tree finders.\"\"\"\n"
  },
  {
    "path": "cotengra/pathfinders/kahypar_profiles/cut_kKaHyPar_sea20.ini",
    "chars": 1923,
    "preview": "# general\nmode=direct\nobjective=cut\nseed=-1\ncmaxnet=1000\nvcycles=0\n# main -> preprocessing -> min hash sparsifier\np-use-"
  },
  {
    "path": "cotengra/pathfinders/kahypar_profiles/cut_rKaHyPar_sea20.ini",
    "chars": 1577,
    "preview": "# general\nmode=recursive\nobjective=cut\nseed=-1\ncmaxnet=-1\nvcycles=0\n# main -> preprocessing -> min hash sparsifier\np-use"
  },
  {
    "path": "cotengra/pathfinders/kahypar_profiles/km1_kKaHyPar_sea20.ini",
    "chars": 1927,
    "preview": "# general\nmode=direct\nobjective=km1\nseed=-1\ncmaxnet=1000\nvcycles=0\n# main -> preprocessing -> min hash sparsifier\np-use-"
  },
  {
    "path": "cotengra/pathfinders/kahypar_profiles/km1_rKaHyPar_sea20.ini",
    "chars": 1577,
    "preview": "# general\nmode=recursive\nobjective=km1\nseed=-1\ncmaxnet=-1\nvcycles=0\n# main -> preprocessing -> min hash sparsifier\np-use"
  },
  {
    "path": "cotengra/pathfinders/kahypar_profiles/old/cut_kKaHyPar_sea20.ini",
    "chars": 1781,
    "preview": "# general\nmode=direct\nobjective=cut\nseed=-1\ncmaxnet=1000\nvcycles=0\n# main -> preprocessing -> min hash sparsifier\np-use-"
  },
  {
    "path": "cotengra/pathfinders/kahypar_profiles/old/cut_rKaHyPar_sea20.ini",
    "chars": 1435,
    "preview": "# general\nmode=recursive\nobjective=cut\nseed=-1\ncmaxnet=-1\nvcycles=0\n# main -> preprocessing -> min hash sparsifier\np-use"
  },
  {
    "path": "cotengra/pathfinders/kahypar_profiles/old/km1_kKaHyPar_sea20.ini",
    "chars": 1785,
    "preview": "# general\nmode=direct\nobjective=km1\nseed=-1\ncmaxnet=1000\nvcycles=0\n# main -> preprocessing -> min hash sparsifier\np-use-"
  },
  {
    "path": "cotengra/pathfinders/kahypar_profiles/old/km1_rKaHyPar_sea20.ini",
    "chars": 1435,
    "preview": "# general\nmode=recursive\nobjective=km1\nseed=-1\ncmaxnet=-1\nvcycles=0\n# main -> preprocessing -> min hash sparsifier\np-use"
  },
  {
    "path": "cotengra/pathfinders/path_basic.py",
    "chars": 56010,
    "preview": "\"\"\"Basic optimization routines.\"\"\"\n\nimport bisect\nimport functools\nimport heapq\nimport itertools\nimport math\n\nfrom ..cor"
  },
  {
    "path": "cotengra/pathfinders/path_compressed.py",
    "chars": 16930,
    "preview": "\"\"\"Compressed contraction tree finding routines.\"\"\"\n\nimport heapq\nimport itertools\n\nfrom ..core import get_hypergraph\nfr"
  },
  {
    "path": "cotengra/pathfinders/path_compressed_greedy.py",
    "chars": 17825,
    "preview": "\"\"\"Greedy contraction tree finders.\"\"\"\n\nimport collections\nimport heapq\nimport itertools\nimport math\n\nfrom ..core import"
  },
  {
    "path": "cotengra/pathfinders/path_edgesort.py",
    "chars": 1087,
    "preview": "from ..core import ContractionTree\nfrom ..oe import PathOptimizer\n\n\nclass EdgeSortOptimizer(PathOptimizer):\n    \"\"\"A pat"
  },
  {
    "path": "cotengra/pathfinders/path_flowcutter.py",
    "chars": 3052,
    "preview": "\"\"\"Flowcutter based pathfinder.\"\"\"\n\nimport re\nimport signal\nimport subprocess\nimport tempfile\nimport time\nimport warning"
  },
  {
    "path": "cotengra/pathfinders/path_greedy.py",
    "chars": 1708,
    "preview": "import functools\n\nfrom ..core import ContractionTree, jitter_dict\nfrom ..hyperoptimizers.hyper import register_hyper_fun"
  },
  {
    "path": "cotengra/pathfinders/path_igraph.py",
    "chars": 6040,
    "preview": "\"\"\"igraph based pathfinders.\"\"\"\n\nimport functools\nfrom collections import defaultdict\n\nfrom ..core import (\n    Contract"
  },
  {
    "path": "cotengra/pathfinders/path_kahypar.py",
    "chars": 6578,
    "preview": "\"\"\"Contraction tree finders using kahypar hypergraph partitioning.\"\"\"\n\nimport functools\nimport itertools\nfrom os.path im"
  },
  {
    "path": "cotengra/pathfinders/path_labels.py",
    "chars": 4713,
    "preview": "\"\"\"Contraction tree finders using pure python 'labels' hypergraph partitioning.\"\"\"\n\nimport collections\nimport math\n\nfrom"
  },
  {
    "path": "cotengra/pathfinders/path_quickbb.py",
    "chars": 4221,
    "preview": "\"\"\"Quickbb based pathfinder.\"\"\"\n\nimport re\nimport signal\nimport subprocess\nimport tempfile\nimport time\nimport warnings\n\n"
  },
  {
    "path": "cotengra/pathfinders/path_random.py",
    "chars": 1450,
    "preview": "\"\"\"Purely random pathfinder, for initialization and testing purposes.\"\"\"\n\nfrom ..core import ContractionTree\nfrom ..hype"
  },
  {
    "path": "cotengra/pathfinders/path_simulated_annealing.py",
    "chars": 20917,
    "preview": "import collections.abc\nimport functools\nimport itertools\nimport math\nimport time\nfrom numbers import Integral\n\nfrom ..pa"
  },
  {
    "path": "cotengra/pathfinders/treedecomp.py",
    "chars": 5668,
    "preview": "\"\"\"\nThe following functions are adapted from the repository:\n\n    https://github.com/TheoryInPractice/ConSequences\n\nasso"
  },
  {
    "path": "cotengra/plot.py",
    "chars": 55982,
    "preview": "\"\"\"Hypergraph, optimizer, and contraction tree visualization tools.\"\"\"\n\nimport collections\nimport functools\nimport impor"
  },
  {
    "path": "cotengra/presets.py",
    "chars": 6784,
    "preview": "\"\"\"Preset configured optimizers.\"\"\"\n\nimport functools\nimport threading\n\nfrom .core import ContractionTree\nfrom .hyperopt"
  },
  {
    "path": "cotengra/reusable.py",
    "chars": 10559,
    "preview": "import collections\nimport hashlib\nimport pickle\nimport threading\n\nfrom .oe import PathOptimizer\nfrom .utils import DiskD"
  },
  {
    "path": "cotengra/schematic.py",
    "chars": 53348,
    "preview": "\"\"\"Draw 2D and pseudo-3D diagrams programmatically using matplotlib.\"\"\"\n\nimport functools\nimport warnings\nfrom math impo"
  },
  {
    "path": "cotengra/scoring.py",
    "chars": 28898,
    "preview": "\"\"\"Objects for defining and customizing the target cost of a contraction.\"\"\"\n\nimport functools\nimport math\nimport re\n\n# "
  },
  {
    "path": "cotengra/slicer.py",
    "chars": 14487,
    "preview": "\"\"\"Functionality for identifying indices to sliced.\"\"\"\n\nimport collections\nfrom math import log\n\nfrom .core import Contr"
  },
  {
    "path": "cotengra/utils.py",
    "chars": 43691,
    "preview": "\"\"\"Various utilities for cotengra.\"\"\"\n\nimport collections\nimport functools\nimport itertools\nimport math\nimport operator\n"
  },
  {
    "path": "docs/Makefile",
    "chars": 634,
    "preview": "# Minimal makefile for Sphinx documentation\n#\n\n# You can set these variables from the command line, and also\n# from the "
  },
  {
    "path": "docs/_pygments/_pygments_dark.py",
    "chars": 4136,
    "preview": "# -*- coding: utf-8 -*-\n\"\"\"\nPygments version of the sublime Mariana theme.\n\nPygments template by Jan T. Sott (https://gi"
  },
  {
    "path": "docs/_pygments/_pygments_light.py",
    "chars": 4145,
    "preview": "# -*- coding: utf-8 -*-\n\"\"\"\nPygments (light) version of the sublime Mariana theme.\n\nPygments template by Jan T. Sott (ht"
  },
  {
    "path": "docs/_static/my-styles.css",
    "chars": 1223,
    "preview": "@import url('https://fonts.googleapis.com/css2?family=Atkinson+Hyperlegible:ital,wght@0,400;0,700;1,400;1,700&family=Spl"
  },
  {
    "path": "docs/advanced.ipynb",
    "chars": 21681,
    "preview": "{\n \"cells\": [\n  {\n   \"cell_type\": \"markdown\",\n   \"id\": \"5642ace5-fcab-4ca8-a96c-dcd7b3d51eb2\",\n   \"metadata\": {},\n   \"so"
  },
  {
    "path": "docs/basics.ipynb",
    "chars": 48141,
    "preview": "{\n \"cells\": [\n  {\n   \"cell_type\": \"markdown\",\n   \"id\": \"9c8e1ba6-6b35-4dec-830e-686299698e2b\",\n   \"metadata\": {},\n   \"so"
  },
  {
    "path": "docs/changelog.md",
    "chars": 11756,
    "preview": "# Changelog\n\n## v0.7.5 (2025-06-12)\n\n**Enhancements**\n\n- [`ContractionTree.print_contractions`](cotengra.ContractionTree"
  },
  {
    "path": "docs/conf.py",
    "chars": 4325,
    "preview": "# Configuration file for the Sphinx documentation builder.\n#\n# For the full list of built-in configuration values, see t"
  },
  {
    "path": "docs/contraction.ipynb",
    "chars": 289705,
    "preview": "{\n \"cells\": [\n  {\n   \"cell_type\": \"markdown\",\n   \"id\": \"69e58b47-c160-40c4-a12e-e1299918b5af\",\n   \"metadata\": {},\n   \"so"
  },
  {
    "path": "docs/examples/ex_compressed_contraction.ipynb",
    "chars": 1369094,
    "preview": "{\n \"cells\": [\n  {\n   \"cell_type\": \"markdown\",\n   \"id\": \"b4ea66a9-262e-4909-bfd6-8025935ec0b8\",\n   \"metadata\": {},\n   \"so"
  },
  {
    "path": "docs/examples/ex_large_output_lazy.ipynb",
    "chars": 359907,
    "preview": "{\n \"cells\": [\n  {\n   \"cell_type\": \"markdown\",\n   \"id\": \"48db7bce\",\n   \"metadata\": {},\n   \"source\": [\n    \"# Contracting "
  },
  {
    "path": "docs/examples/ex_trace_contraction_to_matmuls.ipynb",
    "chars": 96885,
    "preview": "{\n \"cells\": [\n  {\n   \"cell_type\": \"markdown\",\n   \"id\": \"551a15b6\",\n   \"metadata\": {},\n   \"source\": [\n    \"(ex_extract_co"
  },
  {
    "path": "docs/high-level-interface.ipynb",
    "chars": 195780,
    "preview": "{\n \"cells\": [\n  {\n   \"cell_type\": \"markdown\",\n   \"id\": \"bb4240d5-bb9d-429c-b506-ccad03801ad0\",\n   \"metadata\": {},\n   \"so"
  },
  {
    "path": "docs/index.md",
    "chars": 3403,
    "preview": "<p align=\"center\"><img src=\"https://imgur.com/jMO138y.png\" alt=\"cotengra\" width=\"60%\" height=\"60%\"></p>\n\n[![tests](https"
  },
  {
    "path": "docs/index_examples.md",
    "chars": 186,
    "preview": "# Examples\n\n```{toctree}\n:caption: Examples\n:maxdepth: 2\n\nexamples/ex_large_output_lazy.ipynb\nexamples/ex_compressed_con"
  },
  {
    "path": "docs/installation.md",
    "chars": 4513,
    "preview": "# Installation\n\n`cotengra` is available on both [pypi](https://pypi.org/project/cotengra/) and\n[conda-forge](https://ana"
  },
  {
    "path": "docs/make.bat",
    "chars": 765,
    "preview": "@ECHO OFF\n\npushd %~dp0\n\nREM Command file for Sphinx documentation\n\nif \"%SPHINXBUILD%\" == \"\" (\n\tset SPHINXBUILD=sphinx-bu"
  },
  {
    "path": "docs/trees.ipynb",
    "chars": 167534,
    "preview": "{\n \"cells\": [\n  {\n   \"cell_type\": \"markdown\",\n   \"id\": \"43a0bfd5-762c-42c4-bf40-7c7cba1c33f4\",\n   \"metadata\": {},\n   \"so"
  },
  {
    "path": "docs/visualization.ipynb",
    "chars": 648208,
    "preview": "{\n \"cells\": [\n  {\n   \"cell_type\": \"markdown\",\n   \"id\": \"e07b8472-d0ed-4d9f-b35d-9d3d515e9823\",\n   \"metadata\": {},\n   \"so"
  },
  {
    "path": "examples/Example - Reproducing 2005.06787.ipynb",
    "chars": 673444,
    "preview": "{\n \"cells\": [\n  {\n   \"cell_type\": \"markdown\",\n   \"id\": \"nervous-romance\",\n   \"metadata\": {},\n   \"source\": [\n    \"Here we"
  },
  {
    "path": "examples/Example - Reproducing 2103-03074.ipynb",
    "chars": 673032,
    "preview": "{\n \"cells\": [\n  {\n   \"cell_type\": \"markdown\",\n   \"id\": \"acquired-yukon\",\n   \"metadata\": {},\n   \"source\": [\n    \"Here we "
  },
  {
    "path": "examples/Quantum Circuit Example Old.ipynb",
    "chars": 1895470,
    "preview": "{\n \"cells\": [\n  {\n   \"cell_type\": \"markdown\",\n   \"id\": \"88a252c0-f749-48dd-a738-c0066c8b0551\",\n   \"metadata\": {},\n   \"so"
  },
  {
    "path": "examples/Quantum Circuit Example.ipynb",
    "chars": 756096,
    "preview": "{\n \"cells\": [\n  {\n   \"cell_type\": \"markdown\",\n   \"id\": \"f635c2e7-0471-4964-a374-1ab537ae6762\",\n   \"metadata\": {},\n   \"so"
  },
  {
    "path": "examples/benchmarks/cubic_6x6x10.json",
    "chars": 35793,
    "preview": "{\n  \"inputs\": [\n    [\n      \"a\",\n      \"b\",\n      \"c\"\n    ],\n    [\n      \"d\",\n      \"e\",\n      \"c\",\n      \"f\"\n    ],\n   "
  },
  {
    "path": "examples/benchmarks/mps_mpo_L100_chi64_D5.json",
    "chars": 12781,
    "preview": "{\n  \"inputs\": [\n    [\n      \"a\",\n      \"b\"\n    ],\n    [\n      \"a\",\n      \"c\",\n      \"d\"\n    ],\n    [\n      \"c\",\n      \"e"
  },
  {
    "path": "examples/benchmarks/peps_cluster_r2_D10_a.json",
    "chars": 3015,
    "preview": "{\n  \"inputs\": [\n    [\n      \"a\",\n      \"b\",\n      \"c\",\n      \"d\",\n      \"e\"\n    ],\n    [\n      \"f\",\n      \"g\",\n      \"h\""
  },
  {
    "path": "examples/benchmarks/qucirc_rrzz_n56_d13.json",
    "chars": 38865,
    "preview": "{\n  \"inputs\": [\n    [\n      \"a\",\n      \"b\"\n    ],\n    [\n      \"c\",\n      \"d\"\n    ],\n    [\n      \"e\",\n      \"f\"\n    ],\n  "
  },
  {
    "path": "examples/benchmarks/rand_50_5_a.json",
    "chars": 5682,
    "preview": "{\n  \"inputs\": [\n    [\n      \"b\",\n      \"p\",\n      \"v\",\n      \"Ä\",\n      \"Ð\",\n      \"à\",\n      \"æ\"\n    ],\n    [\n      \"b\""
  },
  {
    "path": "examples/benchmarks/randreg_200_3_a.json",
    "chars": 12657,
    "preview": "{\n  \"inputs\": [\n    [\n      \"É\",\n      \"Ê\",\n      \"Ë\"\n    ],\n    [\n      \"Þ\",\n      \"þ\",\n      \"Ɛ\"\n    ],\n    [\n      \"E"
  },
  {
    "path": "examples/benchmarks/rtree_100_a.json",
    "chars": 4722,
    "preview": "{\n  \"inputs\": [\n    [\n      \"a\",\n      \"b\"\n    ],\n    [\n      \"c\"\n    ],\n    [\n      \"d\"\n    ],\n    [\n      \"e\",\n      \""
  },
  {
    "path": "examples/benchmarks/sycamore_n53_m20_s0_e0_pABCDCDAB.json",
    "chars": 30265,
    "preview": "{\n  \"inputs\": [\n    [\n      \"a\",\n      \"b\",\n      \"c\",\n      \"d\"\n    ],\n    [\n      \"e\",\n      \"f\",\n      \"g\",\n      \"h\""
  },
  {
    "path": "examples/circuit_n53_m10_s0_e0_pABCDCDAB.qsim",
    "chars": 42045,
    "preview": "53\n0 hz_1_2 0 \n0 x_1_2 1 \n0 x_1_2 2 \n0 hz_1_2 3 \n0 y_1_2 4 \n0 hz_1_2 5 \n0 hz_1_2 6 \n0 hz_1_2 7 \n0 hz_1_2 8 \n0 x_1_2 9 \n0"
  },
  {
    "path": "examples/circuit_n53_m12_s0_e0_pABCDCDAB.qsim",
    "chars": 50412,
    "preview": "53\n0 hz_1_2 0 \n0 x_1_2 1 \n0 x_1_2 2 \n0 hz_1_2 3 \n0 y_1_2 4 \n0 hz_1_2 5 \n0 hz_1_2 6 \n0 hz_1_2 7 \n0 hz_1_2 8 \n0 x_1_2 9 \n0"
  },
  {
    "path": "examples/circuit_n53_m20_s0_e0_pABCDCDAB.qsim",
    "chars": 83874,
    "preview": "53\n0 hz_1_2 0 \n0 x_1_2 1 \n0 x_1_2 2 \n0 hz_1_2 3 \n0 y_1_2 4 \n0 hz_1_2 5 \n0 hz_1_2 6 \n0 hz_1_2 7 \n0 hz_1_2 8 \n0 x_1_2 9 \n0"
  },
  {
    "path": "examples/ex_jax.py",
    "chars": 2605,
    "preview": "\"\"\"This script shows how to manually use jax to jit-compile the core\ncontraction.\n\"\"\"\n\nfrom concurrent.futures import Th"
  },
  {
    "path": "examples/ex_mpi_executor.py",
    "chars": 1787,
    "preview": "\"\"\"This script illustrates how to parallelize both the contraction path\nfinding and sliced contraction computation using"
  },
  {
    "path": "examples/ex_mpi_spmd.py",
    "chars": 1578,
    "preview": "\"\"\"This script illustrates how to parallelize both the contraction path\nfinding and sliced contraction computation using"
  },
  {
    "path": "pyproject.toml",
    "chars": 6101,
    "preview": "[project]\nname = \"cotengra\"\ndescription = \"Hyper optimized contraction trees for large tensor networks and einsums.\"\nrea"
  },
  {
    "path": "tests/__init__.py",
    "chars": 0,
    "preview": ""
  },
  {
    "path": "tests/test_backends.py",
    "chars": 3397,
    "preview": "\"\"\"Cross-backend correctness tests for ``ContractionTree.contract``.\n\nEach backend is exercised across the cartesian pro"
  },
  {
    "path": "tests/test_compressed.py",
    "chars": 3022,
    "preview": "import pytest\n\nimport cotengra as ctg\n\n\ndef test_compressed_greedy():\n    chi = 4\n    inputs, output, _, size_dict = ctg"
  },
  {
    "path": "tests/test_compute.py",
    "chars": 8044,
    "preview": "import numpy as np\nimport pytest\nfrom numpy.testing import assert_allclose\n\nimport cotengra as ctg\n\n# these are taken fr"
  },
  {
    "path": "tests/test_hypergraph.py",
    "chars": 1675,
    "preview": "import pytest\n\nimport cotengra as ctg\n\n\ndef test_shortest_distances():\n    inputs, output, _, size_dict = ctg.utils.latt"
  },
  {
    "path": "tests/test_interface.py",
    "chars": 4844,
    "preview": "import numpy as np\nimport pytest\n\nimport cotengra as ctg\n\n\n@pytest.mark.parametrize(\"optimize_type\", [\"preset\", \"list\", "
  },
  {
    "path": "tests/test_optimizers.py",
    "chars": 20517,
    "preview": "import subprocess\nfrom collections import defaultdict\n\nimport autoray as ar\nimport numpy as np\nimport pytest\n\nimport cot"
  },
  {
    "path": "tests/test_parallel.py",
    "chars": 7472,
    "preview": "\"\"\"Tests for parallel pool management behavior.\"\"\"\n\nimport concurrent.futures\nimport multiprocessing\nimport os\n\nimport p"
  },
  {
    "path": "tests/test_paths_basic.py",
    "chars": 6640,
    "preview": "import numpy as np\nimport pytest\nfrom numpy.testing import assert_allclose\n\nimport cotengra as ctg\nimport cotengra.pathf"
  },
  {
    "path": "tests/test_slicer.py",
    "chars": 880,
    "preview": "import pytest\n\nimport cotengra as ctg\n\n\ndef test_slicer():\n    tree = ctg.utils.rand_tree(30, 5, seed=42, d_max=3)\n    s"
  },
  {
    "path": "tests/test_tree.py",
    "chars": 19214,
    "preview": "import pytest\n\nimport cotengra as ctg\n\n\n@pytest.mark.parametrize(\"nodeops\", [\"frozenset[int]\", \"BitSetInt\", \"ssa\"])\ndef "
  }
]

About this extraction

This page contains the full source code of the jcmgray/cotengra GitHub repository, extracted and formatted as plain text for AI agents and large language models (LLMs). The extraction comprises 118 files (8.3 MB, approximately 2.2M tokens) and a symbol index of 1396 extracted functions, classes, methods, constants, and types. The output can be supplied to any AI tool that accepts text input.
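Each entry in the file listing above is a JSON object with three fields: `path` (the file's location in the repository), `chars` (its size in characters), and `preview` (the first few hundred characters of its content). As a minimal sketch of how a tool might consume this manifest, the following parses an inline excerpt in the same shape; the excerpt values are copied from the listing above, and the `preview` fields are truncated to placeholders:

```python
import json

# A small excerpt of the per-file metadata array, in the same
# shape as the listing above: "path", "chars", "preview" per entry.
# The "preview" values are elided here for brevity.
manifest_text = """
[
  {"path": "cotengra/interface.py", "chars": 38498, "preview": "..."},
  {"path": "cotengra/parallel.py", "chars": 19490, "preview": "..."},
  {"path": "tests/test_tree.py", "chars": 19214, "preview": "..."}
]
"""

entries = json.loads(manifest_text)

# Aggregate statistics: total size and the largest file in this excerpt.
total_chars = sum(e["chars"] for e in entries)
largest = max(entries, key=lambda e: e["chars"])

print(total_chars)      # 77202
print(largest["path"])  # cotengra/interface.py
```

The same loop applied to the full array reproduces the headline numbers (118 files, 8.3 MB).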

Extracted by GitExtract, a GitHub-repository-to-text converter built by Nikandr Surkov.