Full Code of gameltb/ComfyUI_stable_fast for AI

Repository: gameltb/ComfyUI_stable_fast
Branch: main
Commit: a3422e852077
Files: 23
Total size: 213.7 KB

Directory structure:
gitextract_h6p_bfmr/

├── .gitignore
├── LICENSE
├── README.md
├── __init__.py
├── module/
│   ├── comfy_trace/
│   │   ├── model_base.py
│   │   ├── nodes_freelunch.py
│   │   ├── nodes_model_downscale.py
│   │   ├── openaimodel.py
│   │   └── sd.py
│   ├── comfy_trace_utilities.py
│   ├── controlnet_tensorrt.py
│   ├── model_base_tensorrt.py
│   ├── onnx_module_refit.py
│   ├── openaimodel_tensorrt.py
│   ├── patched_onnx_export/
│   │   └── utils_2_4_0.py
│   ├── sd_tensorrt.py
│   ├── sfast_pipeline_compiler.py
│   ├── tensorrt_utilities.py
│   └── tensorrt_wrapper.py
├── node.py
├── requirements.txt
├── tensorrt_node.py
└── tests/
    └── workflow.json

================================================
FILE CONTENTS
================================================

================================================
FILE: .gitignore
================================================
# Byte-compiled / optimized / DLL files
__pycache__/
*.py[cod]
*$py.class

# C extensions
*.so

# Distribution / packaging
.Python
build/
develop-eggs/
dist/
downloads/
eggs/
.eggs/
lib/
lib64/
parts/
sdist/
var/
wheels/
share/python-wheels/
*.egg-info/
.installed.cfg
*.egg
MANIFEST

# PyInstaller
#  Usually these files are written by a python script from a template
#  before PyInstaller builds the exe, so as to inject date/other infos into it.
*.manifest
*.spec

# Installer logs
pip-log.txt
pip-delete-this-directory.txt

# Unit test / coverage reports
htmlcov/
.tox/
.nox/
.coverage
.coverage.*
.cache
nosetests.xml
coverage.xml
*.cover
*.py,cover
.hypothesis/
.pytest_cache/
cover/

# Translations
*.mo
*.pot

# Django stuff:
*.log
local_settings.py
db.sqlite3
db.sqlite3-journal

# Flask stuff:
instance/
.webassets-cache

# Scrapy stuff:
.scrapy

# Sphinx documentation
docs/_build/

# PyBuilder
.pybuilder/
target/

# Jupyter Notebook
.ipynb_checkpoints

# IPython
profile_default/
ipython_config.py

# pyenv
#   For a library or package, you might want to ignore these files since the code is
#   intended to run in multiple environments; otherwise, check them in:
# .python-version

# pipenv
#   According to pypa/pipenv#598, it is recommended to include Pipfile.lock in version control.
#   However, in case of collaboration, if having platform-specific dependencies or dependencies
#   having no cross-platform support, pipenv may install dependencies that don't work, or not
#   install all needed dependencies.
#Pipfile.lock

# poetry
#   Similar to Pipfile.lock, it is generally recommended to include poetry.lock in version control.
#   This is especially recommended for binary packages to ensure reproducibility, and is more
#   commonly ignored for libraries.
#   https://python-poetry.org/docs/basic-usage/#commit-your-poetrylock-file-to-version-control
#poetry.lock

# pdm
#   Similar to Pipfile.lock, it is generally recommended to include pdm.lock in version control.
#pdm.lock
#   pdm stores project-wide configurations in .pdm.toml, but it is recommended to not include it
#   in version control.
#   https://pdm.fming.dev/#use-with-ide
.pdm.toml

# PEP 582; used by e.g. github.com/David-OConnor/pyflow and github.com/pdm-project/pdm
__pypackages__/

# Celery stuff
celerybeat-schedule
celerybeat.pid

# SageMath parsed files
*.sage.py

# Environments
.env
.venv
env/
venv/
ENV/
env.bak/
venv.bak/

# Spyder project settings
.spyderproject
.spyproject

# Rope project settings
.ropeproject

# mkdocs documentation
/site

# mypy
.mypy_cache/
.dmypy.json
dmypy.json

# Pyre type checker
.pyre/

# pytype static type analyzer
.pytype/

# Cython debug symbols
cython_debug/

# PyCharm
#  JetBrains specific template is maintained in a separate JetBrains.gitignore that can
#  be found at https://github.com/github/gitignore/blob/main/Global/JetBrains.gitignore
#  and can be added to the global gitignore or merged into this file.  For a more nuclear
#  option (not recommended) you can uncomment the following to ignore the entire idea folder.
#.idea/
repo/
tensorrt_engine_cache/
refit_info/

================================================
FILE: LICENSE
================================================
MIT License

Copyright (c) 2024 gameltb

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.


================================================
FILE: README.md
================================================
# ComfyUI_stable_fast

Experimental usage of [stable-fast](https://github.com/chengzeyi/stable-fast) and TensorRT.

> [!NOTE]
>
> Official TensorRT node: https://github.com/comfyanonymous/ComfyUI_TensorRT  
> This repo is still experimental; it mainly tries out a TensorRT setup that does not need to be recompiled repeatedly.

[Speed Test](#speed-test)

# Update

- 2024-07-31: Unfortunately, reusing the same engine across different models causes slight variations in the results or makes the engine completely unusable. Added an option to build dedicated engines for different models. However, some models still produce outputs that differ from PyTorch.
- 2024-07-29: Significantly improved the performance of starting and switching TensorRT models when an engine cache exists, on PyTorch 2.4.0. Added WEIGHT_STREAMING support, so you can run SDXL on a 6GB device with TensorRT. However, the engine unloading caused by VAE decoding can greatly slow down the overall generation speed.

# Installation

```bash
git clone https://github.com/gameltb/ComfyUI_stable_fast custom_nodes/ComfyUI_stable_fast
```

## stable-fast

You'll need to follow the guide below to enable the stable-fast node.

[stable-fast installation](https://github.com/chengzeyi/stable-fast?tab=readme-ov-file#installation)

> [!NOTE]
>
> Requires stable-fast >= 1.0.0.

## TensorRT (testing)

> [!NOTE]
>
> Currently only tested on Linux; not tested on Windows.

The following packages need to be installed to use TensorRT.

```bash
pip install onnx zstandard onnxscript --upgrade
pip install --pre --upgrade --extra-index-url https://pypi.nvidia.com tensorrt==10.2.0
pip install onnx-graphsurgeon polygraphy --extra-index-url https://pypi.ngc.nvidia.com
```
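
If the installation succeeded, the packages should be importable. A quick sanity check (suggested here, not part of this repo) is:

```bash
python -c "import tensorrt; print(tensorrt.__version__)"
python -c "import onnx, onnx_graphsurgeon, polygraphy; print('onnx toolchain ok')"
```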

## Usage

Please refer to the [screenshot](#screenshot)

## stable-fast

It can work with LoRA, ControlNet and LCM. SD1.5 and SSD-1B are supported. SDXL should work.  
Running ComfyUI with `--disable-cuda-malloc` may optimize the speed further.

> [!NOTE]
>
> - FreeU and PatchModelAddDownscale are now supported experimentally; just use the comfy nodes as usual.
> - stable-fast does not work well with accelerate, so this node has no effect when VRAM is low (for example, a 6GB VRAM card running SDXL).
> - stable-fast only speeds things up from the second image generated with the same model onward. If you switch models or LoRAs frequently, consider disabling enable_cuda_graph.
> - **It is better to connect the `Apply StableFast Unet` node directly to the `KSampler` node, with no weight-changing nodes (such as `Load LoRA`) between them. For some nodes, however, placing them between the two can prevent useless recompilation caused by changed node parameters; the `FreeU` node is one example. You can try other nodes there, but they are not guaranteed to work properly.**

## TensorRT

Run ComfyUI with `--disable-xformers --force-fp16 --fp16-vae` and use `Apply TensorRT Unet` like `Apply StableFast Unet`.  
The Engine will be cached in `tensorrt_engine_cache`.

> [!NOTE]
>
> - If you encounter an error after updating, you can try deleting the `tensorrt_engine_cache`.

### Apply TensorRT Unet Node

- enable_cuda_graph
  - Whether to use CUDA Graph. This should make inference slightly faster, but the current implementation has a problem and the option has no effect for now. Even when it works, it is not compatible with WEIGHT_STREAMING.
- patch_type
  - `UNET` compiles the whole UNet as a single model and is faster. However, some nodes are unusable because TensorRT does not support some PyTorch operations; the FreeU node is one example. Also, if you don't have enough video memory to hold the entire model, you need to select this option to use TensorRT; otherwise it is likely to be slower than running directly.
  - `UNET_BLOCK` splits the UNet into several small models so that PyTorch can perform the operations between them that TensorRT does not support. It takes quite a bit of time to compile and load, and the finished speed is not much different from `UNET`, so using this option may not be acceptable most of the time.
- keep_width
- keep_height
- keep_batch_size
- keep_embedding_block
  - The `keep_` parameters above are used when building the engine; they specify the maximum values the engine will accept. The node also looks up the cached engine based on these values, so to rebuild the engine as rarely as possible, keep a fixed set of values per model type (e.g. SD1.5 or SDXL). If any parameter you actually use exceeds them, a rebuild is triggered. `keep_embedding_block` relates to prompt length: the longer the prompt, the larger the value needed.
- use_dedicated_engine
  - Build a dedicated engine for each model.

When you use ControlNet, different control image sizes will currently trigger an engine rebuild.

# Table

## Features

|                  | Stable Fast           | TensorRT(UNET) | TensorRT(UNET_BLOCK) |
| ---------------- | --------------------- | -------------- | -------------------- |
| SD1.5            | ✓               | ✓        | ✓              |
| SDXL             | untested (should work) | ✓        | untested             |
| SSD-1B           | ✓               | ✓        | ✓              |
| Lora             | ✓               | ✓        | ✓              |
| ControlNet Unet  | ✓               | ✓        | ✓              |
| VAE decode       | WIP                   | ✓        | -                    |
| ControlNet Model | WIP                   | WIP            | -                    |

## Nodes Tested

|                        | Stable Fast | TensorRT(UNET) | TensorRT(UNET_BLOCK) |
| ---------------------- | ----------- | -------------- | -------------------- |
| Load LoRA              | ✓     | ✓        | ✓              |
| FreeU(FreeU_V2)        | ✓     | ✗        | ✓              |
| PatchModelAddDownscale | ✓     | WIP            | ✓              |

## Speed Test

### GeForce RTX 3060 Mobile

GeForce RTX 3060 Mobile (80W) 6GB, Linux, torch 2.1.1, stable fast 0.0.14, tensorrt 9.2.0.post12.dev5, xformers 0.0.23.  
[workflow](./tests/workflow.json): SD1.5, 512x512, batch_size 1, euler_ancestral karras, 20 steps, fp16.

The Stable Fast and xformers tests ran ComfyUI with `--disable-cuda-malloc`.  
The TensorRT and PyTorch tests ran ComfyUI with `--disable-xformers`.

###### TensorRT Note

On the first TensorRT launch it can take up to 10 minutes to build the engine; with a timing cache this drops to about 2–3 minutes, and with an engine cache to about 20–30 seconds for now.

#### Avg it/s

|                                  | Stable Fast (enable_cuda_graph) | TensorRT (UNET) | TensorRT (UNET_BLOCK) | pytorch cross attention | xformers |
| -------------------------------- | ------------------------------- | --------------- | --------------------- | ----------------------- | -------- |
|                                  | 10.10 it/s                      | 10.95it/s       | 10.66it/s             | 7.02it/s                | 7.90it/s |
| enable FreeU                     | 9.42 it/s                       | ✗         | 10.04it/s             | 6.75it/s                | 7.54it/s |
| enable Patch Model Add Downscale | 10.81 it/s                      | ✗         | 11.30it/s             | 7.46it/s                | 8.41it/s |

#### Avg time spent

| workflow                         | Stable Fast (enable_cuda_graph) | TensorRT (UNET) | TensorRT (UNET_BLOCK) | pytorch cross attention | xformers |
| -------------------------------- | ------------------------------- | --------------- | --------------------- | ----------------------- | -------- |
|                                  | 2.21s (first 17s)               | 2.05s           | 2.10s                 | 3.06s                   | 2.76s    |
| enable FreeU                     | 2.35s (first 18.5s)             | ✗         | 2.24s                 | 3.18s                   | 2.88s    |
| enable Patch Model Add Downscale | 2.08s (first 31.37s)            | ✗         | 2.03s                 | 2.89s                   | 2.61s    |

# Screenshot

![sd1.5](asset/scr.png)
![ssd-1b](asset/scr1.png)


================================================
FILE: __init__.py
================================================
import sys
import traceback

NODE_CLASS_MAPPINGS = {}

# A dictionary that contains the friendly/humanly readable titles for the nodes
NODE_DISPLAY_NAME_MAPPINGS = {}

try:
    from .node import ApplyStableFastUnet

    SF_NODE_CLASS_MAPPINGS = {
        "ApplyStableFastUnet": ApplyStableFastUnet,
    }

    SF_NODE_DISPLAY_NAME_MAPPINGS = {
        "ApplyStableFastUnet": "Apply StableFast Unet",
    }
    NODE_CLASS_MAPPINGS.update(SF_NODE_CLASS_MAPPINGS)
    NODE_DISPLAY_NAME_MAPPINGS.update(SF_NODE_DISPLAY_NAME_MAPPINGS)
except Exception as e:
    print("ComfyUI_stable_fast: StableFast node import failed.")
    traceback.print_exception(*sys.exc_info()) 

try:
    from .tensorrt_node import (
        ApplyTensorRTControlNet,
        ApplyTensorRTUnet,
        ApplyTensorRTVaeDecoder,
    )

    TRT_NODE_CLASS_MAPPINGS = {
        "ApplyTensorRTUnet": ApplyTensorRTUnet,
        "ApplyTensorRTVaeDecoder": ApplyTensorRTVaeDecoder,
        "ApplyTensorRTControlNet": ApplyTensorRTControlNet,
    }
    TRT_NODE_DISPLAY_NAME_MAPPINGS = {
        "ApplyTensorRTUnet": "Apply TensorRT Unet",
        "ApplyTensorRTVaeDecoder": "Apply TensorRT VaeDecoder",
        "ApplyTensorRTControlNet": "Apply TensorRT ControlNet",
    }
    NODE_CLASS_MAPPINGS.update(TRT_NODE_CLASS_MAPPINGS)
    NODE_DISPLAY_NAME_MAPPINGS.update(TRT_NODE_DISPLAY_NAME_MAPPINGS)
except Exception as e:
    print("ComfyUI_stable_fast: tensorrt_node import failed.")
    traceback.print_exception(*sys.exc_info()) 

if len(NODE_CLASS_MAPPINGS) == 0:
    raise Exception("import failed")

================================================
FILE: module/comfy_trace/model_base.py
================================================
import contextlib

import torch

from ..comfy_trace_utilities import ModuleFactory, hash_arg
from .nodes_freelunch import FreeU, FreeU_V2
from .nodes_model_downscale import (
    PatchModelAddDownscale_input_block_patch,
    PatchModelAddDownscale_output_block_patch,
)
from .openaimodel import PatchUNetModel
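
# PATCH_PATCH_MAP maps the __qualname__ of the closures that ComfyUI's FreeU and
# PatchModelAddDownscale nodes register in transformer_options["patches"] to the
# traceable torch.nn.Module replacements defined in this package, so the patched
# UNet can be traced/exported instead of calling the original Python closures.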

PATCH_PATCH_MAP = {
    "FreeU.patch.<locals>.output_block_patch": FreeU,
    "FreeU_V2.patch.<locals>.output_block_patch": FreeU_V2,
    "PatchModelAddDownscale.patch.<locals>.input_block_patch": PatchModelAddDownscale_input_block_patch,
    "PatchModelAddDownscale.patch.<locals>.output_block_patch": PatchModelAddDownscale_output_block_patch,
}


class BaseModelApplyModelModule(torch.nn.Module):
    def __init__(self, func, module):
        super().__init__()
        self.func = func
        self.module = module

    def forward(
        self,
        input_x,
        timestep,
        c_concat=None,
        c_crossattn=None,
        y=None,
        control=None,
        transformer_options={},
    ):
        kwargs = {"y": y}

        new_transformer_options = {}
        if "patches" in transformer_options:
            new_transformer_options["patches"] = transformer_options["patches"]

        return self.func(
            input_x,
            timestep,
            c_concat=c_concat,
            c_crossattn=c_crossattn,
            control=control,
            transformer_options=new_transformer_options,
            **kwargs,
        )


class BaseModelApplyModelModuleFactory(ModuleFactory):
    kwargs_name = (
        "input_x",
        "timestep",
        "c_concat",
        "c_crossattn",
        "y",
        "control",
    )

    def __init__(self, callable, kwargs) -> None:
        self.callable = callable
        self.unet_config = callable.__self__.model_config.unet_config
        self.kwargs = kwargs
        self.patch_module = {}
        self.patch_module_parameter = {}
        self.converted_kwargs = self.gen_converted_kwargs()

    def gen_converted_kwargs(self):
        converted_kwargs = {}
        for arg_name, arg in self.kwargs.items():
            if arg_name in self.kwargs_name:
                converted_kwargs[arg_name] = arg

        transformer_options = self.kwargs.get("transformer_options", {})
        patches = transformer_options.get("patches", {})

        patch_module = {}
        patch_module_parameter = {}

        for patch_type_name, patch_list in patches.items():
            patch_module[patch_type_name] = []
            patch_module_parameter[patch_type_name] = []
            for patch in patch_list:
                if patch.__qualname__ in PATCH_PATCH_MAP:
                    patch, parameter = PATCH_PATCH_MAP[patch.__qualname__].from_closure(
                        patch, transformer_options
                    )
                    patch_module[patch_type_name].append(patch)
                    patch_module_parameter[patch_type_name].append(parameter)
                    # output_block_patch_module.append(torch.jit.script(patch))
                else:
                    print(f"\33[93mWarning: Ignore patch {patch.__qualname__}.\33[0m")

        new_transformer_options = {}
        new_transformer_options["patches"] = patch_module_parameter
        if len(new_transformer_options["patches"]) > 0:
            converted_kwargs["transformer_options"] = new_transformer_options

        self.patch_module = patch_module
        self.patch_module_parameter = patch_module_parameter
        return converted_kwargs

    def gen_cache_key(self):
        key_kwargs = {}
        for k, v in self.converted_kwargs.items():
            if k == "transformer_options":
                nv = {}
                for tk, tv in v.items():
                    if tk not in ("patches",):  # ,"cond_or_uncond"
                        nv[tk] = tv
                v = nv
            key_kwargs[k] = v

        patch_module_cache_key = {}
        for patch_type_name, patch_list in self.patch_module.items():
            patch_module_cache_key[patch_type_name] = []
            for patch in patch_list:
                patch_module_cache_key[patch_type_name].append(patch.gen_cache_key())

        return (
            self.callable.__class__.__qualname__,
            hash_arg(self.unet_config),
            hash_arg(key_kwargs),
            hash_arg(patch_module_cache_key),
        )

    @contextlib.contextmanager
    def converted_module_context(self):
        module = BaseModelApplyModelModule(self.callable, self.callable.__self__)

        if len(self.patch_module) > 0:
            self.callable.__self__.diffusion_model = PatchUNetModel.cast_from(
                self.callable.__self__.diffusion_model
            )
            try:
                self.callable.__self__.diffusion_model.set_patch_module(
                    self.patch_module
                )

                yield (module, self.converted_kwargs)
            finally:
                self.callable.__self__.diffusion_model = (
                    self.callable.__self__.diffusion_model.cast_to_base_model()
                )
        else:
            yield (module, self.converted_kwargs)


class UNetModelModule(torch.nn.Module):
    def __init__(self, module):
        super().__init__()
        self.module = module

    def forward(
        self,
        x,
        timesteps=None,
        context=None,
        y=None,
        control=None,
        transformer_options={},
        **kwargs,
    ):
        new_transformer_options = {}
        if "patches" in transformer_options:
            new_transformer_options["patches"] = transformer_options["patches"]

        return self.module(
            x,
            timesteps=timesteps,
            context=context,
            y=y,
            control=control,
            transformer_options=new_transformer_options,
            **kwargs,
        )


class UNetModelModuleFactory(ModuleFactory):
    kwargs_name = (
        "x",
        "timesteps",
        "context",
        "y",
        "control",
    )

    def __init__(self, diffusion_model, unet_config, **kwargs) -> None:
        self.diffusion_model = diffusion_model
        self.unet_config = unet_config
        self.kwargs = kwargs
        self.patch_module = {}
        self.patch_module_parameter = {}
        self.converted_kwargs = self.gen_converted_kwargs()

    def gen_converted_kwargs(self):
        converted_kwargs = {}
        for arg_name, arg in self.kwargs.items():
            if arg_name in self.kwargs_name:
                converted_kwargs[arg_name] = arg

        transformer_options = self.kwargs.get("transformer_options", {})
        patches = transformer_options.get("patches", {})

        patch_module = {}
        patch_module_parameter = {}

        for patch_type_name, patch_list in patches.items():
            patch_module[patch_type_name] = []
            patch_module_parameter[patch_type_name] = []
            for patch in patch_list:
                if patch.__qualname__ in PATCH_PATCH_MAP:
                    patch, parameter = PATCH_PATCH_MAP[patch.__qualname__].from_closure(
                        patch, transformer_options
                    )
                    patch_module[patch_type_name].append(patch)
                    patch_module_parameter[patch_type_name].append(parameter)
                    # output_block_patch_module.append(torch.jit.script(patch))
                else:
                    print(f"\33[93mWarning: Ignore patch {patch.__qualname__}.\33[0m")

        new_transformer_options = {}
        new_transformer_options["patches"] = patch_module_parameter
        if len(new_transformer_options["patches"]) > 0:
            converted_kwargs["transformer_options"] = new_transformer_options

        self.patch_module = patch_module
        self.patch_module_parameter = patch_module_parameter
        return converted_kwargs

    def gen_cache_key(self):
        key_kwargs = {}
        for k, v in self.converted_kwargs.items():
            if k == "transformer_options":
                nv = {}
                for tk, tv in v.items():
                    if tk not in ("patches",):  # ,"cond_or_uncond"
                        nv[tk] = tv
                v = nv
            key_kwargs[k] = v

        patch_module_cache_key = {}
        for patch_type_name, patch_list in self.patch_module.items():
            patch_module_cache_key[patch_type_name] = []
            for patch in patch_list:
                patch_module_cache_key[patch_type_name].append(patch.gen_cache_key())

        return (
            self.diffusion_model.__class__.__qualname__,
            hash_arg(self.unet_config),
            hash_arg(key_kwargs),
            hash_arg(patch_module_cache_key),
        )

    @contextlib.contextmanager
    def converted_module_context(self):
        module = UNetModelModule(self.diffusion_model)

        if len(self.patch_module) > 0:
            diffusion_model = PatchUNetModel.cast_from(self.diffusion_model)
            try:
                diffusion_model.set_patch_module(self.patch_module)

                yield (module, self.converted_kwargs)
            finally:
                diffusion_model = diffusion_model.cast_to_base_model()
        else:
            yield (module, self.converted_kwargs)


================================================
FILE: module/comfy_trace/nodes_freelunch.py
================================================
# code originally taken from: https://github.com/ChenyangSi/FreeU (under MIT License)

import copy

import torch


def Fourier_filter(x, threshold: int, scale: float):
    # FFT
    x_freq = torch.fft.fftn(x.float(), dim=(-2, -1))
    x_freq = torch.fft.fftshift(x_freq, dim=(-2, -1))

    B, C, H, W = x_freq.shape
    mask = torch.ones((B, C, H, W), device=x.device)

    crow, ccol = H // 2, W // 2
    mask[
        ..., crow - threshold : crow + threshold, ccol - threshold : ccol + threshold
    ] = scale
    x_freq = x_freq * mask

    # IFFT
    x_freq = torch.fft.ifftshift(x_freq, dim=(-2, -1))
    x_filtered = torch.fft.ifftn(x_freq, dim=(-2, -1)).real

    return x_filtered.to(x.dtype)


class FreeU(torch.nn.Module):
    def __init__(self, scale_map):
        super().__init__()
        self.scale_map = scale_map

    def forward(self, h, hsp, parameter, transformer_options):
        for k, scale in zip(self.scale_map, parameter):
            if k == h.shape[1]:
                h[:, : h.shape[1] // 2] = h[:, : h.shape[1] // 2] * scale[0]
                hsp = Fourier_filter(hsp, threshold=1, scale=scale[1])
        return h, hsp
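
    # from_closure (below) rebuilds this patch from the closure that comfy's FreeU
    # node registers as an output_block_patch: the captured scale_dict is read via
    # __code__.co_freevars / __closure__ and split into the channel-count keys
    # (scale_map, part of the cache key) and a tensor of scale pairs that is passed
    # back in at runtime as `parameter`.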

    @staticmethod
    def from_closure(closure, transformer_options):
        scale_dict = {}
        for var_name, var in zip(closure.__code__.co_freevars, closure.__closure__):
            if var_name == "scale_dict":
                scale_dict = copy.deepcopy(var.cell_contents)
                break
        return FreeU(list(scale_dict.keys())), torch.Tensor(list(scale_dict.values()))

    def gen_cache_key(self):
        return [self.__class__.__name__, self.scale_map]


class FreeU_V2(torch.nn.Module):
    def __init__(self, scale_map):
        super().__init__()
        self.scale_map = scale_map

    def forward(self, h, hsp, parameter, transformer_options):
        for k, scale in zip(self.scale_map, parameter):
            if k == h.shape[1]:
                hidden_mean = h.mean(1).unsqueeze(1)
                B = hidden_mean.shape[0]
                hidden_max, _ = torch.max(hidden_mean.view(B, -1), dim=-1, keepdim=True)
                hidden_min, _ = torch.min(hidden_mean.view(B, -1), dim=-1, keepdim=True)
                hidden_mean = (hidden_mean - hidden_min.unsqueeze(2).unsqueeze(3)) / (
                    hidden_max - hidden_min
                ).unsqueeze(2).unsqueeze(3)

                h[:, : h.shape[1] // 2] = h[:, : h.shape[1] // 2] * (
                    (scale[0] - 1) * hidden_mean + 1
                )

                hsp = Fourier_filter(hsp, threshold=1, scale=scale[1])

        return h, hsp

    @staticmethod
    def from_closure(closure, transformer_options):
        scale_dict = {}
        for var_name, var in zip(closure.__code__.co_freevars, closure.__closure__):
            if var_name == "scale_dict":
                scale_dict = copy.deepcopy(var.cell_contents)
                break
        return FreeU_V2(list(scale_dict.keys())), torch.Tensor(
            list(scale_dict.values())
        )

    def gen_cache_key(self):
        return [self.__class__.__name__, self.scale_map]


================================================
FILE: module/comfy_trace/nodes_model_downscale.py
================================================
import comfy.utils
import torch


class PatchModelAddDownscale_input_block_patch(torch.nn.Module):
    def __init__(
        self,
        block_number,
        downscale_method,
        downscale_factor,
        sigma,
        sigma_start,
        sigma_end,
    ):
        super().__init__()
        self.block_number = block_number
        self.downscale_method = downscale_method
        self.downscale_factor = downscale_factor
        self.sigma = sigma
        self.sigma_start = sigma_start
        self.sigma_end = sigma_end

    def forward(self, h, parameter, transformer_options):
        if transformer_options["block"][1] == self.block_number:
            if self.sigma <= self.sigma_start and self.sigma >= self.sigma_end:
                h = comfy.utils.common_upscale(
                    h,
                    round(int(h.shape[-1]) * (1.0 / self.downscale_factor)),
                    round(int(h.shape[-2]) * (1.0 / self.downscale_factor)),
                    self.downscale_method,
                    "disabled",
                )
        return h

    @staticmethod
    def from_closure(closure, transformer_options):
        parameter_dict = {}
        for var_name, var in zip(closure.__code__.co_freevars, closure.__closure__):
            parameter_dict[var_name] = var.cell_contents

        sigma = transformer_options["sigmas"][0].item()
        return (
            PatchModelAddDownscale_input_block_patch(
                parameter_dict["block_number"],
                parameter_dict["downscale_method"],
                parameter_dict["downscale_factor"],
                sigma,
                parameter_dict["sigma_start"],
                parameter_dict["sigma_end"],
            ),
            (),
        )

    def gen_cache_key(self):
        flag = 0
        if self.sigma <= self.sigma_start and self.sigma >= self.sigma_end:
            flag = 1
        return [
            self.__class__.__name__,
            flag,
            self.block_number,
            self.downscale_method,
            self.downscale_factor,
        ]


class PatchModelAddDownscale_output_block_patch(torch.nn.Module):
    def __init__(self, upscale_method):
        super().__init__()
        self.upscale_method = upscale_method

    def forward(self, h, hsp, parameter, transformer_options):
        if h.shape[2] != hsp.shape[2]:
            h = comfy.utils.common_upscale(
                h,
                int(hsp.shape[-1]),
                int(hsp.shape[-2]),
                self.upscale_method,
                "disabled",
            )
        return h, hsp

    @staticmethod
    def from_closure(closure, transformer_options):
        parameter_dict = {}
        for var_name, var in zip(closure.__code__.co_freevars, closure.__closure__):
            parameter_dict[var_name] = var.cell_contents
        return (
            PatchModelAddDownscale_output_block_patch(parameter_dict["upscale_method"]),
            (),
        )

    def gen_cache_key(self):
        return [self.__class__.__name__, self.upscale_method]


================================================
FILE: module/comfy_trace/openaimodel.py
================================================
import copy

import torch as th
import torch.nn as nn
from comfy.ldm.modules.diffusionmodules.openaimodel import (
    UNetModel,
    apply_control,
    forward_timestep_embed,
)
from comfy.ldm.modules.diffusionmodules.util import timestep_embedding

origin_forward_timestep_embed = forward_timestep_embed


class ForwardTimestepEmbedModule(th.nn.Module):
    def __init__(self, ts, transformer_options={}, num_video_frames=None):
        super().__init__()
        self.module = ts
        self.transformer_options = transformer_options
        self.num_video_frames = num_video_frames

    def forward(
        self,
        x,
        emb,
        context=None,
        output_shape_tensor=None,
        time_context=None,
        image_only_indicator=None,
    ):
        return origin_forward_timestep_embed(
            self.module,
            x,
            emb,
            context=context,
            transformer_options=self.transformer_options,
            output_shape=output_shape_tensor
            if output_shape_tensor is None
            else output_shape_tensor.shape,
            time_context=time_context,
            num_video_frames=self.num_video_frames,
            image_only_indicator=image_only_indicator,
        )


class PatchUNetModel(UNetModel):
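    # cast_from() swaps a plain comfy UNetModel's __class__ in place (no weight
    # copy) and adds per-block patch ModuleLists via patch_init(); the patched
    # forward below then runs any registered patch modules between blocks, and
    # cast_to_base_model() removes them and restores the original class.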
    @staticmethod
    def cast_from(other):
        tcls = UNetModel
        if isinstance(other, tcls):
            other.__class__ = PatchUNetModel
            other.patch_init()
            return other
        raise ValueError(f"instance must be {tcls.__qualname__}")

    def cast_to_base_model(self):
        self.patch_deinit()
        self.__class__ = UNetModel
        return self

    def patch_init(self):
        self.input_block_patch = nn.ModuleList(
            [nn.ModuleList() for _ in self.input_blocks]
        )
        self.input_block_patch_after_skip = nn.ModuleList(
            [nn.ModuleList() for _ in self.input_blocks]
        )
        self.output_block_patch = nn.ModuleList(
            [nn.ModuleList() for _ in self.output_blocks]
        )

    def patch_deinit(self):
        del self.input_block_patch
        del self.input_block_patch_after_skip
        del self.output_block_patch

    def set_patch_module(self, patch_module):
        if "input_block_patch" in patch_module:
            self.input_block_patch = nn.ModuleList(
                [
                    nn.ModuleList(copy.deepcopy(patch_module["input_block_patch"]))
                    for _ in self.input_blocks
                ]
            )
        if "input_block_patch_after_skip" in patch_module:
            self.input_block_patch_after_skip = nn.ModuleList(
                [
                    nn.ModuleList(
                        copy.deepcopy(patch_module["input_block_patch_after_skip"])
                    )
                    for _ in self.input_blocks
                ]
            )
        if "output_block_patch" in patch_module:
            self.output_block_patch = nn.ModuleList(
                [
                    nn.ModuleList(copy.deepcopy(patch_module["output_block_patch"]))
                    for _ in self.output_blocks
                ]
            )

    def forward(
        self,
        x,
        timesteps=None,
        context=None,
        y=None,
        control=None,
        transformer_options={},
        **kwargs,
    ):
        """
        Apply the model to an input batch.
        :param x: an [N x C x ...] Tensor of inputs.
        :param timesteps: a 1-D batch of timesteps.
        :param context: conditioning plugged in via crossattn
        :param y: an [N] Tensor of labels, if class-conditional.
        :return: an [N x C x ...] Tensor of outputs.
        """
        transformer_options["original_shape"] = list(x.shape)
        transformer_options["current_index"] = 0
        transformer_patches = transformer_options.get("patches", {})

        num_video_frames = kwargs.get("num_video_frames", self.default_num_video_frames)
        image_only_indicator = kwargs.get("image_only_indicator", None)
        time_context = kwargs.get("time_context", None)

        assert (y is not None) == (
            self.num_classes is not None
        ), "must specify y if and only if the model is class-conditional"
        hs = []
        t_emb = timestep_embedding(
            timesteps, self.model_channels, repeat_only=False
        ).to(self.dtype)
        emb = self.time_embed(t_emb)

        if self.num_classes is not None:
            assert y.shape[0] == x.shape[0]
            emb = emb + self.label_emb(y)

        h = x.type(self.dtype)
        for id, module in enumerate(self.input_blocks):
            transformer_options["block"] = ("input", id)
            h = forward_timestep_embed(
                module,
                h,
                emb,
                context,
                transformer_options,
                time_context=time_context,
                num_video_frames=num_video_frames,
                image_only_indicator=image_only_indicator,
            )
            h = apply_control(h, control, "input")

            for patch_id, input_block_patch_module in enumerate(
                self.input_block_patch[id]
            ):
                h = input_block_patch_module(
                    h,
                    transformer_patches.get("input_block_patch")[patch_id],
                    transformer_options,
                )

            hs.append(h)

            for patch_id, input_block_patch_after_skip_module in enumerate(
                self.input_block_patch_after_skip[id]
            ):
                h = input_block_patch_after_skip_module(
                    h,
                    transformer_patches.get("input_block_patch_after_skip")[patch_id],
                    transformer_options,
                )

        transformer_options["block"] = ("middle", 0)
        h = forward_timestep_embed(
            self.middle_block,
            h,
            emb,
            context,
            transformer_options,
            time_context=time_context,
            num_video_frames=num_video_frames,
            image_only_indicator=image_only_indicator,
        )
        h = apply_control(h, control, "middle")

        for id, module in enumerate(self.output_blocks):
            transformer_options["block"] = ("output", id)
            hsp = hs.pop()
            hsp = apply_control(hsp, control, "output")

            for patch_id, output_block_patch_module in enumerate(
                self.output_block_patch[id]
            ):
                h, hsp = output_block_patch_module(
                    h,
                    hsp,
                    transformer_patches.get("output_block_patch")[patch_id],
                    transformer_options,
                )

            h = th.cat([h, hsp], dim=1)
            del hsp
            if len(hs) > 0:
                output_shape = hs[-1].shape
            else:
                output_shape = None
            h = forward_timestep_embed(
                module,
                h,
                emb,
                context,
                transformer_options,
                output_shape,
                time_context=time_context,
                num_video_frames=num_video_frames,
                image_only_indicator=image_only_indicator,
            )
        h = h.type(x.dtype)
        if self.predict_codebook_ids:
            return self.id_predictor(h)
        else:
            return self.out(h)


================================================
FILE: module/comfy_trace/sd.py
================================================
import torch


class VAEDecodeModule(torch.nn.Module):
    def __init__(self, module, decode):
        super().__init__()
        self.module = module
        self.decode = decode

    def forward(self, samples):
        return self.decode(samples)


================================================
FILE: module/comfy_trace_utilities.py
================================================
import contextlib
import copy

import torch


def hash_arg(arg):
    # micro optimization: bool obj is an instance of int
    if isinstance(arg, (str, int, float, bytes)):
        return arg
    if isinstance(arg, (tuple, list)):
        return tuple(map(hash_arg, arg))
    if isinstance(arg, dict):
        return tuple(
            sorted(
                ((hash_arg(k), hash_arg(v)) for k, v in arg.items()), key=lambda x: x[0]
            )
        )
    if isinstance(arg, torch.dtype):
        return str(arg)

    return type(arg)
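
# Example (illustrative): hash_arg turns nested kwargs into a hashable key, e.g.
#   hash_arg({"b": 1, "a": [2, 3]}) -> (("a", (2, 3)), ("b", 1))
#   hash_arg(torch.float16)         -> "torch.float16"
# Unhandled objects such as tensors collapse to their type, so the key depends
# on argument structure and dtypes rather than tensor values.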


class ModuleWrapper(torch.nn.Module):
    def __init__(self, module):
        super().__init__()
        self.module = module

    def forward(self, *args, **kwargs):
        return self.module(*args, **kwargs)


class ModuleFactory:
    def __init__(self, callable, kwargs) -> None:
        self.callable = callable
        self.kwargs = kwargs
        self.converted_kwargs = self.gen_converted_kwargs()

    def gen_converted_kwargs(self):
        return self.kwargs

    def get_converted_kwargs(self):
        return self.converted_kwargs

    def gen_cache_key(self):
        return (
            self.callable.__class__.__qualname__,
            hash_arg(self.kwargs),
        )

    @contextlib.contextmanager
    def converted_module_context(self):
        yield (self.callable, self.converted_kwargs)

    def load_state_dict_to_module(self, script_module):
        with self.converted_module_context() as (m_model, m_kwargs):
            script_module.load_state_dict(
                m_model.state_dict(), strict=False, assign=True
            )
        return script_module


class TracerWithCache:
    cache_map = {}

    @staticmethod
    def get_traced_module(module_factory: ModuleFactory, device=None):
        cache_key = module_factory.gen_cache_key()

        if not cache_key in TracerWithCache.cache_map:
            with module_factory.converted_module_context() as (m_model, m_kwargs):
                if device != None:
                    m_model.to(device=device)
                script_module = torch.jit.trace(
                    m_model,
                    example_kwarg_inputs=m_kwargs,
                    strict=True,
                    check_trace=True,
                )

            meta_script_module = script_module.to_empty(device="meta")
            TracerWithCache.cache_map[cache_key] = meta_script_module

        meta_script_module = copy.deepcopy(TracerWithCache.cache_map[cache_key])

        script_module = module_factory.load_state_dict_to_module(meta_script_module)
        return script_module
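
# Usage sketch (illustrative, not from this file): given a ModuleFactory subclass
# such as UNetModelModuleFactory,
#   traced = TracerWithCache.get_traced_module(factory, device="cuda")
# traces the module once with torch.jit.trace, caches a copy moved to the "meta"
# device (structure only, no weights), and on later calls deep-copies that cached
# skeleton and refills it from the live module's state_dict instead of re-tracing.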


================================================
FILE: module/controlnet_tensorrt.py
================================================
from .tensorrt_wrapper import CallableTensorRTEngineWrapper


class CallableTensorRTEngineWrapperDynamicShapeControlNet(
    CallableTensorRTEngineWrapper
):
    args_name = ["x", "hint", "timesteps", "context", "y"]

    def gen_onnx_args(self, kwargs, module=None):
        args_name = []
        args = []
        for arg_name in self.args_name:
            args.append(kwargs.get(arg_name, None))
            if args[-1] != None:
                args_name.append(arg_name)
        dynamic_axes = {
            "x": {0: "B", 2: "H", 3: "W"},
            "hint": {0: "HB", 2: "8H", 3: "8W"},
            "timesteps": {0: "B"},
            "context": {0: "B", 1: "77E"},
        }
        for k in list(dynamic_axes.keys()):
            if not k in args_name:
                dynamic_axes.pop(k)
        return args, args_name, dynamic_axes
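
    # The dynamic_axes names above mark the input dimensions that may vary at
    # runtime when exporting to ONNX: "B" is the batch size and "H"/"W" the
    # latent height/width; the hint axes "HB"/"8H"/"8W" and the context axis
    # "77E" presumably reflect the hint batch, the 8x pixel-space hint
    # resolution, and the 77-token CLIP context blocks (naming assumption, not
    # stated in the repo).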

    def gen_tensorrt_args(self, kwargs):
        input_shape_info = {}
        feed_dict = {}
        for arg_name in self.args_name:
            arg = kwargs.get(arg_name, None)
            if arg != None:
                feed_dict[arg_name] = arg
                input_shape_info[arg_name] = tuple(arg.shape)

        return feed_dict, input_shape_info

    def gen_tensorrt_args_profile(self, input_shape_info):
        min_input_profile_info = {
            "x": {0: 1, 2: 8, 3: 8},
            "hint": {0: 1, 2: 64, 3: 64},
            "timesteps": {0: 1},
            "context": {0: 1, 1: 77},
        }
        input_profile_info = {}
        for arg_name, shape_info in input_shape_info.items():
            min_shape_config = min_input_profile_info.get(arg_name, None)
            min_shape_info = list(shape_info)
            if min_shape_config != None:
                for k, v in min_shape_config.items():
                    min_shape_info[k] = v
            input_profile_info[arg_name] = [
                tuple(min_shape_info),
                shape_info,
                shape_info,
            ]

        return input_profile_info

    def gen_onnx_outputs(self, module):
        outputs_name = []
        for i in range(len(module.input_blocks) + 1):
            outputs_name.append(f"output_{i}")
        self.outputs_name = outputs_name
        return outputs_name

    def gen_tensorrt_outputs(self, output_map):
        output = []
        for output_name in self.outputs_name:
            output.append(output_map[output_name])
        return output


================================================
FILE: module/model_base_tensorrt.py
================================================
import torch

from .tensorrt_wrapper import CallableTensorRTEngineWrapper


class CallableTensorRTEngineWrapperDynamicShapeBaseModelApplyModel(
    CallableTensorRTEngineWrapper
):
    args_name = [
        "input_x",
        "timestep",
        "c_concat",
        "c_crossattn",
        "y",
        "control",
    ]

    def gen_onnx_args(self, kwargs, module=None):
        dynamic_axes = {
            "input_x": {0: "B", 2: "H", 3: "W"},
            "timestep": {0: "B"},
            "c_crossattn": {0: "B", 1: "E"},
            "y": {0: "B"},
        }
        args_name = []
        args = []
        for arg_name in self.args_name:
            arg = kwargs.get(arg_name, None)
            if arg is not None or not isinstance(
                module, (torch.jit.ScriptFunction, torch.jit.ScriptModule)
            ):
                args.append(arg)
                if arg is not None:
                    if arg_name == "control":
                        control_params = arg
                        for key in control_params:
                            for i, v in enumerate(control_params[key]):
                                control_params_name = f"{arg_name}_{key}_{i}"
                                args_name.append(control_params_name)
                                dynamic_axes[control_params_name] = {
                                    0: "B",
                                    2: f"{control_params_name}_H",
                                    3: f"{control_params_name}_W",
                                }
                    else:
                        args_name.append(arg_name)
        if not isinstance(module, (torch.jit.ScriptFunction, torch.jit.ScriptModule)):
            args.append({})
        for k in list(dynamic_axes.keys()):
            if not k in args_name:
                dynamic_axes.pop(k)
        return args, args_name, dynamic_axes

    def gen_tensorrt_args(self, kwargs):
        input_shape_info = {}
        feed_dict = {}
        for arg_name in self.args_name:
            arg = kwargs.get(arg_name, None)
            if arg != None:
                if arg_name == "control":
                    control_params = arg
                    for key in control_params:
                        for i, v in enumerate(control_params[key]):
                            control_params_name = f"{arg_name}_{key}_{i}"
                            feed_dict[control_params_name] = v
                            input_shape_info[control_params_name] = tuple(v.shape)
                else:
                    feed_dict[arg_name] = arg
                    input_shape_info[arg_name] = tuple(arg.shape)

        return feed_dict, input_shape_info

    def gen_tensorrt_args_profile(self, input_shape_info):
        min_input_profile_info = {
            "input_x": {0: 1, 2: 2, 3: 2},
            "timestep": {0: 1},
            "c_crossattn": {0: 1, 1: 77},
            "y": {0: 1},
        }
        input_profile_info = {}
        for arg_name, shape_info in input_shape_info.items():
            if arg_name.startswith("control"):
                min_shape_config = {0: 1, 2: 1, 3: 1}
            else:
                min_shape_config = min_input_profile_info.get(arg_name, None)
            min_shape_info = list(shape_info)
            if min_shape_config != None:
                for k, v in min_shape_config.items():
                    min_shape_info[k] = v
            input_profile_info[arg_name] = [
                tuple(min_shape_info),
                shape_info,
                shape_info,
            ]

        return input_profile_info


================================================
FILE: module/onnx_module_refit.py
================================================
import logging
from collections import OrderedDict
from dataclasses import asdict, dataclass

import onnx
import torch
from onnx import helper, numpy_helper

_logger = logging.getLogger(__name__)


@dataclass
class ParamsDictGenMapValue:
    op: str
    args: list


def make_module_onnx_tensor_gen_map_by_params_dict(
    module: torch.nn.Module, params_dict: dict[str, torch.Tensor]
):
    params_dict_gen_map = {}

    params_dict_dataptr_map = {v.data_ptr(): k for k, v in params_dict.items()}

    not_found_state_dict_list = []
    for k, v in module.state_dict().items():
        if v.data_ptr() in params_dict_dataptr_map:
            params_dict_key = params_dict_dataptr_map[v.data_ptr()]
            assert params_dict_key not in params_dict_gen_map
            if params_dict[params_dict_key].shape == v.shape:
                params_dict_gen_map[params_dict_key] = asdict(
                    ParamsDictGenMapValue("rename", [k])
                )
                # torch.testing.assert_close()
            elif params_dict[params_dict_key].squeeze().shape == v.shape:
                params_dict_gen_map[params_dict_key] = asdict(
                    ParamsDictGenMapValue(
                        "reshape", [k, list(params_dict[params_dict_key].shape)]
                    )
                )
                # torch.testing.assert_close()
            elif params_dict[params_dict_key].transpose(0, 1).shape == v.shape:
                params_dict_gen_map[params_dict_key] = asdict(
                    ParamsDictGenMapValue("transpose", [k, [0, 1]])
                )
                # torch.testing.assert_close()
            else:
                assert False, (
                    k,
                    v.shape,
                    params_dict_key,
                    params_dict[params_dict_key].shape,
                )
        else:
            not_found_state_dict_list.append(k)

    not_found_key_set = set(params_dict.keys()) - set(params_dict_gen_map.keys())
    for not_found_key in not_found_key_set:
        _logger.warning(not_found_key)
    assert len(not_found_key_set) == 0
    return params_dict_gen_map


def make_module_onnx_tensor_gen_map_by_onnx_model(
    module: torch.nn.Module,
    onnx_model: str,
) -> dict:
    # TODO: not implemented yet; fail explicitly instead of returning an
    # undefined params_dict_gen_map.
    raise NotImplementedError


def make_params_dict_by_module(
    module: torch.nn.Module, params_dict_gen_map: dict[str, dict]
):
    params_dict = {}

    module_state_dict: dict[str, torch.Tensor] = module.state_dict()

    op_map = {
        "rename": lambda name: module_state_dict[name],
        "reshape": lambda name, shape: module_state_dict[name].reshape(tuple(shape)),
        "transpose": lambda name, dims: module_state_dict[name].transpose(*dims),
    }

    for k, v in params_dict_gen_map.items():
        op = v["op"]
        args = v["args"]

        params_dict[k] = op_map[op](*args)

    return params_dict


def make_constant_params_dict_by_onnx_model(
    onnx_model_path,
):
    constant_params_dict = {}

    onnx_model = onnx.load(onnx_model_path)
    for node in onnx_model.graph.node:
        if node.op_type == "Constant":
            for output in node.output:
                if "Constant" in output:
                    attrs = OrderedDict(
                        (a.name, helper.get_attribute_value(a)) for a in node.attribute
                    )
                    ndarry = numpy_helper.to_array(attrs["value"])
                    try:
                        constant_params_dict[output] = torch.Tensor(ndarry.copy())
                    except Exception:
                        print(output, ndarry)
                        continue

    return constant_params_dict
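
# Usage sketch (illustrative, hypothetical path): collect the tensors baked into
# ONNX Constant nodes, keyed by those nodes' output names, e.g.
#   constants = make_constant_params_dict_by_onnx_model("unet.onnx")
# These presumably complement the module state_dict when gathering the weights
# needed by the ONNX/engine refit path this file belongs to.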


================================================
FILE: module/openaimodel_tensorrt.py
================================================
from dataclasses import dataclass, field
from typing import Dict

import comfy.ldm.modules.diffusionmodules.openaimodel
import comfy.model_management
import comfy.model_patcher
import torch
import torch as th
import yaml

from .comfy_trace.openaimodel import (
    ForwardTimestepEmbedModule,
    origin_forward_timestep_embed,
)
from .tensorrt_wrapper import CallableTensorRTEngineWrapper, TensorRTEngineContext

TENSORRT_CONTEXT_KEY = "tensorrt_context"


@dataclass
class TensorRTEngineBlockContext:
    block_cache: Dict[str, CallableTensorRTEngineWrapper] = field(
        default_factory=lambda: {}
    )
    tensorrt_context: TensorRTEngineContext = field(
        default_factory=lambda: TensorRTEngineContext()
    )

    def dump_input_profile_info(self):
        input_shape_info_map = {}
        for key in sorted(self.block_cache):
            input_shape_info_map[key] = self.block_cache[key].input_shape_info
        print(yaml.safe_dump(input_shape_info_map))


class CallableTensorRTEngineWrapperDynamicShapeForwardTimestep(
    CallableTensorRTEngineWrapper
):
    args_name = [
        "x",
        "emb",
        "context",
        "output_shape_tensor",
        "time_context",
        "image_only_indicator",
    ]

    def gen_onnx_args(self, kwargs, module=None):
        args_name = []
        args = []
        for arg_name in self.args_name:
            args.append(kwargs.get(arg_name, None))
            if args[-1] is not None:
                args_name.append(arg_name)
        dynamic_axes = {
            "x": {0: "B", 2: "H", 3: "W"},
            "emb": {0: "B"},
            "context": {0: "B", 1: "E"},
            "output_shape_tensor": {0: "B", 2: "OH", 3: "OW"},
        }
        for k in list(dynamic_axes.keys()):
            if k not in args_name:
                dynamic_axes.pop(k)
        return args, args_name, dynamic_axes

    def gen_tensorrt_args(self, kwargs):
        input_shape_info = {}
        feed_dict = {}
        for arg_name in self.args_name:
            arg = kwargs.get(arg_name, None)
            if arg is not None:
                feed_dict[arg_name] = arg
                input_shape_info[arg_name] = tuple(arg.shape)

        return feed_dict, input_shape_info

    def gen_tensorrt_args_profile(self, input_shape_info):
        min_input_profile_info = {
            "x": {0: 1, 2: 1, 3: 1},
            "emb": {0: 1},
            "context": {0: 1, 1: 77},
            "output_shape_tensor": {0: 1, 2: 1, 3: 1},
        }
        input_profile_info = {}
        for arg_name, shape_info in input_shape_info.items():
            min_shape_config = min_input_profile_info.get(arg_name, None)
            min_shape_info = list(shape_info)
            if min_shape_config is not None:
                for k, v in min_shape_config.items():
                    min_shape_info[k] = v
            input_profile_info[arg_name] = [
                tuple(min_shape_info),
                shape_info,
                shape_info,
            ]

        return input_profile_info


def hook_forward_timestep_embed(
    ts,
    x,
    emb,
    context=None,
    transformer_options={},
    output_shape=None,
    time_context=None,
    num_video_frames=None,
    image_only_indicator=None,
):
    module = ForwardTimestepEmbedModule(ts, transformer_options, num_video_frames)
    tensorrt_block_context: TensorRTEngineBlockContext = transformer_options.get(
        TENSORRT_CONTEXT_KEY, None
    )
    if tensorrt_block_context != None:
        block_key = str(transformer_options["block"])
        block = tensorrt_block_context.block_cache.get(block_key, None)
        if block is None:
            tensorrt_block_context.block_cache[block_key] = (
                CallableTensorRTEngineWrapperDynamicShapeForwardTimestep(
                    tensorrt_block_context.tensorrt_context, block_key
                )
            )
        return tensorrt_block_context.block_cache[block_key](
            module,
            x=x,
            emb=emb,
            context=context,
            output_shape_tensor=output_shape
            if output_shape is None
            else th.empty((output_shape), device=x.device, dtype=x.dtype),
            time_context=time_context,
            image_only_indicator=image_only_indicator,
        )
    return module(
        x,
        emb,
        context=context,
        time_context=time_context,
        image_only_indicator=image_only_indicator,
    )


def do_hook_forward_timestep_embed():
    comfy.ldm.modules.diffusionmodules.openaimodel.forward_timestep_embed = (
        hook_forward_timestep_embed
    )


def undo_hook_forward_timestep_embed():
    comfy.ldm.modules.diffusionmodules.openaimodel.forward_timestep_embed = (
        origin_forward_timestep_embed
    )


class CallableTensorRTEngineWrapperDynamicShapeUNetModelForward(
    CallableTensorRTEngineWrapper
):
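    """TensorRT wrapper for the whole UNet forward with dynamic shapes.

    ControlNet residuals passed in the ``control`` dict are flattened into
    individually named tensor inputs (``control_{key}_{i}``) so they can be
    exported to ONNX and fed to the TensorRT engine.
    """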
    args_name = [
        "x",
        "timesteps",
        "context",
        "y",
        "control",
    ]

    def gen_onnx_args(self, kwargs, module=None):
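        # Dynamic axes for the flat input names; per-tensor height/width axes
        # for the flattened control inputs are added below as they are found.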
        dynamic_axes = {
            "x": {0: "B", 2: "H", 3: "W"},
            "timesteps": {0: "B"},
            "context": {0: "B", 1: "E"},
            "y": {0: "B"},
        }
        args_name = []
        args = []
        for arg_name in self.args_name:
            arg = kwargs.get(arg_name, None)
            if arg is not None or not isinstance(
                module, (torch.jit.ScriptFunction, torch.jit.ScriptModule)
            ):
                args.append(arg)
                if arg is not None:
                    if arg_name == "control":
                        control_params = arg
                        for key in control_params:
                            for i, v in enumerate(control_params[key]):
                                control_params_name = f"{arg_name}_{key}_{i}"
                                args_name.append(control_params_name)
                                dynamic_axes[control_params_name] = {
                                    0: "B",
                                    2: f"{control_params_name}_H",
                                    3: f"{control_params_name}_W",
                                }
                    else:
                        args_name.append(arg_name)
        if not isinstance(module, (torch.jit.ScriptFunction, torch.jit.ScriptModule)):
            args.append({})
        for k in list(dynamic_axes.keys()):
            if k not in args_name:
                dynamic_axes.pop(k)
        return args, args_name, dynamic_axes

    def gen_tensorrt_args(self, kwargs):
        input_shape_info = {}
        feed_dict = {}
        for arg_name in self.args_name:
            arg = kwargs.get(arg_name, None)
            if arg is not None:
                if arg_name == "control":
                    control_params = arg
                    for key in control_params:
                        for i, v in enumerate(control_params[key]):
                            control_params_name = f"{arg_name}_{key}_{i}"
                            feed_dict[control_params_name] = v
                            input_shape_info[control_params_name] = tuple(v.shape)
                else:
                    feed_dict[arg_name] = arg
                    input_shape_info[arg_name] = tuple(arg.shape)

        return feed_dict, input_shape_info

    def gen_tensorrt_args_profile(self, input_shape_info):
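        # Same min/opt/max profile construction as the block wrapper above;
        # flattened control_* inputs get a generic minimum of batch 1 and a
        # 1x1 spatial size.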
        min_input_profile_info = {
            "x": {0: 1, 2: 2, 3: 2},
            "timesteps": {0: 1},
            "context": {0: 1, 1: 77},
            "y": {0: 1},
        }
        input_profile_info = {}
        for arg_name, shape_info in input_shape_info.items():
            if arg_name.startswith("control"):
                min_shape_config = {0: 1, 2: 1, 3: 1}
            else:
                min_shape_config = min_input_profile_info.get(arg_name, None)
            min_shape_info = list(shape_info)
            if min_shape_config is not None:
                for k, v in min_shape_config.items():
                    min_shape_info[k] = v
            input_profile_info[arg_name] = [
                tuple(min_shape_info),
                shape_info,
                shape_info,
            ]

        return input_profile_info


================================================
FILE: module/patched_onnx_export/utils_2_4_0.py
================================================
# mypy: allow-untyped-defs
"""Functions to export models into the ONNX IR format.

These models can be loaded with the ONNX library and then
converted to models which run on other deep learning frameworks.
"""

from __future__ import annotations

import contextlib
import copy
import inspect
import io
import re
import textwrap
import typing
import warnings
from typing import (
    Any,
    Callable,
    Collection,
    Dict,
    List,
    Mapping,
    Optional,
    Sequence,
    Set,
    Tuple,
    Type,
    Union,
    cast,
)

import torch
import torch._C._onnx as _C_onnx
import torch.jit._trace
import torch.serialization
from torch import _C
from torch.onnx import (  # noqa: F401
    _constants,
    _exporter_states,
    errors,
    symbolic_caffe2,
    symbolic_helper,
)
from torch.onnx._globals import GLOBALS
from torch.onnx._internal import (
    _beartype,
    diagnostics,
    jit_utils,
    onnx_proto_utils,
    registration,
)

__all__ = [
    "is_in_onnx_export",
    "select_model_mode_for_export",
    "disable_apex_o2_state_dict_hook",
    "setup_onnx_logging",
    "exporter_context",
    "export",
    "model_signature",
    "warn_on_static_input_change",
    "unpack_quantized_tensor",
    "export_to_pretty_string",
    "unconvertible_ops",
    "register_custom_op_symbolic",
    "unregister_custom_op_symbolic",
]


def is_in_onnx_export() -> bool:
    """Returns whether it is in the middle of ONNX export."""
    return GLOBALS.in_onnx_export


# TODO(justinchuby): Remove dependency to this global variable from constant_fold.cpp
# Skip check due to cannot import IValue from torch._C
_params_dict = {}  # type: ignore[var-annotated]


@contextlib.contextmanager
@_beartype.beartype
def select_model_mode_for_export(model, mode: _C_onnx.TrainingMode):
    r"""A context manager to temporarily set the training mode of ``model``
    to ``mode``, resetting it when we exit the with-block.

    Args:
        model: Same type and meaning as ``model`` arg to :func:`export`.
        mode: Same type and meaning as ``training`` arg to :func:`export`.
    """
    if not isinstance(mode, _C_onnx.TrainingMode):
        raise TypeError(
            f"'mode' should be a torch.onnx.TrainingMode enum, but got '{type(mode)}'."
        )
    originally_training: bool = False

    if hasattr(model, "training"):
        originally_training = model.training

        # ONNX opset 12 has better support for training amenable models, with updated
        # versions of the dropout and batch_norm operators
        if mode == _C_onnx.TrainingMode.TRAINING or (
            mode == _C_onnx.TrainingMode.PRESERVE and originally_training
        ):
            GLOBALS.export_training = True
            if GLOBALS.export_onnx_opset_version < 12:
                warnings.warn(
                    "You are exporting the model in training mode with onnx opset "
                    f"version {GLOBALS.export_onnx_opset_version}. "
                    "Opset versions lower than opset 12 will not be able to export "
                    "nodes such as Dropout and BatchNorm correctly."
                )
        else:
            GLOBALS.export_training = False

        GLOBALS.training_mode = mode
        if mode == _C_onnx.TrainingMode.TRAINING:
            model.train(True)
        elif mode == _C_onnx.TrainingMode.EVAL:
            model.train(False)
        # else mode == _C_onnx.TrainingMode.PRESERVE, do nothing

    try:
        yield
    finally:
        if hasattr(model, "training") and not mode == _C_onnx.TrainingMode.PRESERVE:
            model.train(originally_training)


@contextlib.contextmanager
@_beartype.beartype
def disable_apex_o2_state_dict_hook(
    model: Union[torch.nn.Module, torch.jit.ScriptFunction],
):
    # Apex O2 hook state_dict to return fp16 weights as fp32.
    # Exporter cannot identify them as same tensors.
    # Since this hook is only used by optimizer, it is safe to
    # remove this hook while exporting.
    if not isinstance(model, torch.jit.ScriptFunction):
        model_hooks = {}  # type: ignore[var-annotated]
        for module in model.modules():
            for key, hook in module._state_dict_hooks.items():
                if type(hook).__name__ == "O2StateDictHook":
                    if module not in model_hooks:
                        model_hooks[module] = {}
                    model_hooks[module][key] = hook
            if module in model_hooks:
                for key in model_hooks[module]:
                    module._state_dict_hooks.pop(key)
        try:
            yield
        finally:
            # Add the hooks back
            for module, m_map in model_hooks.items():
                for key, hook in m_map.items():
                    module._state_dict_hooks[key] = hook
    else:
        try:
            yield
        finally:
            pass


@contextlib.contextmanager
@_beartype.beartype
def setup_onnx_logging(verbose: bool):
    is_originally_enabled = torch.onnx.is_onnx_log_enabled()
    if is_originally_enabled or verbose:
        torch.onnx.enable_log()
    try:
        yield
    finally:
        if not is_originally_enabled:
            torch.onnx.disable_log()


@contextlib.contextmanager
@_beartype.beartype
def exporter_context(model, mode: _C_onnx.TrainingMode, verbose: bool):
    with select_model_mode_for_export(
        model, mode
    ) as mode_ctx, disable_apex_o2_state_dict_hook(
        model
    ) as apex_ctx, setup_onnx_logging(
        verbose
    ) as log_ctx, diagnostics.create_export_diagnostic_context() as diagnostic_ctx:
        yield (mode_ctx, apex_ctx, log_ctx, diagnostic_ctx)


def export(
    model: Union[torch.nn.Module, torch.jit.ScriptModule, torch.jit.ScriptFunction],
    args: Union[Tuple[Any, ...], torch.Tensor],
    f: Optional[Union[str, io.BytesIO]] = None,
    export_params: bool = True,
    verbose: bool = False,
    training: _C_onnx.TrainingMode = _C_onnx.TrainingMode.EVAL,
    input_names: Optional[Sequence[str]] = None,
    output_names: Optional[Sequence[str]] = None,
    operator_export_type: _C_onnx.OperatorExportTypes = _C_onnx.OperatorExportTypes.ONNX,
    opset_version: Optional[int] = None,
    do_constant_folding: bool = True,
    dynamic_axes: Optional[
        Union[Mapping[str, Mapping[int, str]], Mapping[str, Sequence[int]]]
    ] = None,
    keep_initializers_as_inputs: Optional[bool] = None,
    custom_opsets: Optional[Mapping[str, int]] = None,
    export_modules_as_functions: Union[bool, Collection[Type[torch.nn.Module]]] = False,
    autograd_inlining: Optional[bool] = True,
    dynamo: bool = False,
) -> Optional[torch.onnx.ONNXProgram]:
    r"""Exports a model into ONNX format.

    If ``model`` is not a :class:`torch.jit.ScriptModule` nor a
    :class:`torch.jit.ScriptFunction`, this runs
    ``model`` once in order to convert it to a TorchScript graph to be exported
    (the equivalent of :func:`torch.jit.trace`). Thus this has the same limited support
    for dynamic control flow as :func:`torch.jit.trace`.

    Args:
        model (:class:`torch.nn.Module`, :class:`torch.jit.ScriptModule` or :class:`torch.jit.ScriptFunction`):
            the model to be exported.
        args (tuple or torch.Tensor):

            args can be structured either as:

            1. ONLY A TUPLE OF ARGUMENTS::

                args = (x, y, z)

            The tuple should contain model inputs such that ``model(*args)`` is a valid
            invocation of the model. Any non-Tensor arguments will be hard-coded into the
            exported model; any Tensor arguments will become inputs of the exported model,
            in the order they occur in the tuple.

            2. A TENSOR::

                args = torch.Tensor([1])

            This is equivalent to a 1-ary tuple of that Tensor.

            3. A TUPLE OF ARGUMENTS ENDING WITH A DICTIONARY OF NAMED ARGUMENTS::

                args = (
                    x,
                    {
                        "y": input_y,
                        "z": input_z
                    }
                )

            All but the last element of the tuple will be passed as non-keyword arguments,
            and named arguments will be set from the last element. If a named argument is
            not present in the dictionary, it is assigned the default value, or None if a
            default value is not provided.

            .. note::
                If a dictionary is the last element of the args tuple, it will be
                interpreted as containing named arguments. In order to pass a dict as the
                last non-keyword arg, provide an empty dict as the last element of the args
                tuple. For example, instead of::

                    torch.onnx.export(
                        model,
                        (
                            x,
                            # WRONG: will be interpreted as named arguments
                            {y: z}
                        ),
                        "test.onnx.pb"
                    )

                Write::

                    torch.onnx.export(
                        model,
                        (
                            x,
                            {y: z},
                            {}
                        ),
                        "test.onnx.pb"
                    )

        f: a file-like object (such that ``f.fileno()`` returns a file descriptor)
            or a string containing a file name.  A binary protocol buffer will be written
            to this file.
        export_params (bool, default True): if True, all parameters will
            be exported. Set this to False if you want to export an untrained model.
            In this case, the exported model will first take all of its parameters
            as arguments, with the ordering as specified by ``model.state_dict().values()``
        verbose (bool, default False): if True, prints a description of the
            model being exported to stdout. In addition, the final ONNX graph will include the
            field ``doc_string`` from the exported model which mentions the source code locations
            for ``model``. If True, ONNX exporter logging will be turned on.
        training (enum, default TrainingMode.EVAL):
            * ``TrainingMode.EVAL``: export the model in inference mode.
            * ``TrainingMode.PRESERVE``: export the model in inference mode if model.training is
                False and in training mode if model.training is True.
            * ``TrainingMode.TRAINING``: export the model in training mode. Disables optimizations
                which might interfere with training.
        input_names (list of str, default empty list): names to assign to the
            input nodes of the graph, in order.
        output_names (list of str, default empty list): names to assign to the
            output nodes of the graph, in order.
        operator_export_type (enum, default OperatorExportTypes.ONNX):

            * ``OperatorExportTypes.ONNX``: Export all ops as regular ONNX ops
                (in the default opset domain).
            * ``OperatorExportTypes.ONNX_FALLTHROUGH``: Try to convert all ops
                to standard ONNX ops in the default opset domain. If unable to do so
                (e.g. because support has not been added to convert a particular torch op to ONNX),
                fall back to exporting the op into a custom opset domain without conversion. Applies
                to `custom ops <https://pytorch.org/tutorials/advanced/torch_script_custom_ops.html>`_
                as well as ATen ops. For the exported model to be usable, the runtime must support
                these non-standard ops.
            * ``OperatorExportTypes.ONNX_ATEN``: All ATen ops (in the TorchScript namespace "aten")
                are exported as ATen ops (in opset domain "org.pytorch.aten").
                `ATen <https://pytorch.org/cppdocs/#aten>`_ is PyTorch's built-in tensor library, so
                this instructs the runtime to use PyTorch's implementation of these ops.

                .. warning::

                    Models exported this way are probably runnable only by Caffe2.

                    This may be useful if the numeric differences in implementations of operators are
                    causing large differences in behavior between PyTorch and Caffe2 (which is more
                    common on untrained models).

            * ``OperatorExportTypes.ONNX_ATEN_FALLBACK``: Try to export each ATen op
                (in the TorchScript namespace "aten") as a regular ONNX op. If we are unable to do so
                (e.g. because support has not been added to convert a particular torch op to ONNX),
                fall back to exporting an ATen op. See documentation on OperatorExportTypes.ONNX_ATEN for
                context.
                For example::

                    graph(%0 : Float):
                    %3 : int = prim::Constant[value=0]()
                    # conversion unsupported
                    %4 : Float = aten::triu(%0, %3)
                    # conversion supported
                    %5 : Float = aten::mul(%4, %0)
                    return (%5)

                Assuming ``aten::triu`` is not supported in ONNX, this will be exported as::

                    graph(%0 : Float):
                    %1 : Long() = onnx::Constant[value={0}]()
                    # not converted
                    %2 : Float = aten::ATen[operator="triu"](%0, %1)
                    # converted
                    %3 : Float = onnx::Mul(%2, %0)
                    return (%3)

                .. warning::

                    Models exported this way are probably runnable only by Caffe2.

        opset_version (int, default 17): The version of the
            `default (ai.onnx) opset <https://github.com/onnx/onnx/blob/master/docs/Operators.md>`_
            to target. Must be >= 7 and <= 17.
        do_constant_folding (bool, default True): Apply the constant-folding optimization.
            Constant-folding will replace some of the ops that have all constant inputs
            with pre-computed constant nodes.
        dynamic_axes (dict[string, dict[int, string]] or dict[string, list(int)], default empty dict):

            By default the exported model will have the shapes of all input and output tensors
            set to exactly match those given in ``args``. To specify axes of tensors as
            dynamic (i.e. known only at run-time), set ``dynamic_axes`` to a dict with schema:

            * KEY (str): an input or output name. Each name must also be provided in ``input_names`` or
                ``output_names``.
            * VALUE (dict or list): If a dict, keys are axis indices and values are axis names. If a
                list, each element is an axis index.

            For example::

                class SumModule(torch.nn.Module):
                    def forward(self, x):
                        return torch.sum(x, dim=1)

                torch.onnx.export(
                    SumModule(),
                    (torch.ones(2, 2),),
                    "onnx.pb",
                    input_names=["x"],
                    output_names=["sum"]
                )

            Produces::

                input {
                  name: "x"
                  ...
                      shape {
                        dim {
                          dim_value: 2  # axis 0
                        }
                        dim {
                          dim_value: 2  # axis 1
                ...
                output {
                  name: "sum"
                  ...
                      shape {
                        dim {
                          dim_value: 2  # axis 0
                ...

            While::

                torch.onnx.export(
                    SumModule(),
                    (torch.ones(2, 2),),
                    "onnx.pb",
                    input_names=["x"],
                    output_names=["sum"],
                    dynamic_axes={
                        # dict value: manually named axes
                        "x": {0: "my_custom_axis_name"},
                        # list value: automatic names
                        "sum": [0],
                    }
                )

            Produces::

                input {
                  name: "x"
                  ...
                      shape {
                        dim {
                          dim_param: "my_custom_axis_name"  # axis 0
                        }
                        dim {
                          dim_value: 2  # axis 1
                ...
                output {
                  name: "sum"
                  ...
                      shape {
                        dim {
                          dim_param: "sum_dynamic_axes_1"  # axis 0
                ...

        keep_initializers_as_inputs (bool, default None): If True, all the
            initializers (typically corresponding to parameters) in the
            exported graph will also be added as inputs to the graph. If False,
            then initializers are not added as inputs to the graph, and only
            the non-parameter inputs are added as inputs.
            This may allow for better optimizations (e.g. constant folding) by
            backends/runtimes.

            If True, `deduplicate_initializers` pass will not be executed. This means
            initializers with duplicated values will not be deduplicated and
            will be treated as distinct inputs to the graph. This allows different
            input initializers to be supplied at the runtime following export.

            If ``opset_version < 9``, initializers MUST be part of graph
            inputs and this argument will be ignored and the behavior will be
            equivalent to setting this argument to True.

            If None, then the behavior is chosen automatically as follows:

            * If ``operator_export_type=OperatorExportTypes.ONNX``, the behavior is equivalent
                to setting this argument to False.
            * Else, the behavior is equivalent to setting this argument to True.

        custom_opsets (dict[str, int], default empty dict): A dict with schema:

            * KEY (str): opset domain name
            * VALUE (int): opset version

            If a custom opset is referenced by ``model`` but not mentioned in this dictionary,
            the opset version is set to 1. Only custom opset domain name and version should be
            indicated through this argument.

        export_modules_as_functions (bool or set of type of nn.Module, default False): Flag to enable
            exporting all ``nn.Module`` forward calls as local functions in ONNX. Or a set to indicate the
            particular types of modules to export as local functions in ONNX.
            This feature requires ``opset_version`` >= 15, otherwise the export will fail. This is because
            ``opset_version`` < 15 implies IR version < 8, which means no local function support.
            Module variables will be exported as function attributes. There are two categories of function
            attributes.

            1. Annotated attributes: class variables that have type annotations via
            `PEP 526-style <https://www.python.org/dev/peps/pep-0526/#class-and-instance-variable-annotations>`_
            will be exported as attributes.
            Annotated attributes are not used inside the subgraph of ONNX local function because
            they are not created by PyTorch JIT tracing, but they may be used by consumers
            to determine whether or not to replace the function with a particular fused kernel.

            2. Inferred attributes: variables that are used by operators inside the module. Attribute names
            will have prefix "inferred::". This is to differentiate from predefined attributes retrieved from
            python module annotations. Inferred attributes are used inside the subgraph of ONNX local function.

            * ``False`` (default): export ``nn.Module`` forward calls as fine grained nodes.
            * ``True``: export all ``nn.Module`` forward calls as local function nodes.
            * Set of type of nn.Module: export ``nn.Module`` forward calls as local function nodes,
                only if the type of the ``nn.Module`` is found in the set.

        autograd_inlining (bool, default True): Flag used to control whether to inline autograd functions.
            Refer to https://github.com/pytorch/pytorch/pull/74765 for more details.

        dynamo (bool, default False): Whether to export the model with Dynamo instead of TorchScript.

    Raises:
        :class:`torch.onnx.errors.CheckerError`: If the ONNX checker detects an invalid ONNX graph.
        :class:`torch.onnx.errors.UnsupportedOperatorError`: If the ONNX graph cannot be exported because it
            uses an operator that is not supported by the exporter.
        :class:`torch.onnx.errors.OnnxExporterError`: Other errors that can occur during export.
            All errors are subclasses of :class:`errors.OnnxExporterError`.
    """

    if dynamo:
        # Unsupported parameters for dynamo export
        # TODO: These are not supported AT THE TIME
        warnings.warn(
            "f, export_params, verbose, training, input_names, output_names, operator_export_type, opset_version, "
            "do_constant_folding, keep_initializers_as_inputs, custom_opsets, export_modules_as_functions, and "
            "autograd_inlining are not supported for dynamo export at the moment."
        )
        # TODO: check args normalization
        args = _decide_input_format(model, args)
        kwargs = {}
        if args is not None and isinstance(args[-1], dict):
            kwargs = args[-1]
            args = args[:-1]
        # TODO: refactor this when we have migrated ExportedProgram and
        # need users to specify dynamic_axes
        if dynamic_axes is None or not isinstance(dynamic_axes, dict):
            dynamic_shapes = False
        else:
            dynamic_shapes = True
            warnings.warn(
                "Specified dynamic axes is not supported for dynamo export at the moment."
            )
        # TODO: expose more ExportOptions?
        export_options = torch.onnx.ExportOptions(dynamic_shapes=dynamic_shapes)
        onnx_program = torch.onnx.dynamo_export(
            model, *args, **kwargs, export_options=export_options
        )
        if f is not None:
            onnx_program.save(f)
        return onnx_program

    if f is None:
        raise ValueError(
            "Export destination must be specified for torchscript-onnx export."
        )

    return _export(
        model,
        args,
        f,
        export_params,
        verbose,
        training,
        input_names,
        output_names,
        operator_export_type=operator_export_type,
        opset_version=opset_version,
        do_constant_folding=do_constant_folding,
        dynamic_axes=dynamic_axes,
        keep_initializers_as_inputs=keep_initializers_as_inputs,
        custom_opsets=custom_opsets,
        export_modules_as_functions=export_modules_as_functions,
        autograd_inlining=autograd_inlining,
    )


@_beartype.beartype
def _is_constant_tensor_list(node):
    if node.kind() != "prim::Constant":
        return False
    output_type = node.output().type()
    if output_type.isSubtypeOf(_C.ListType.ofTensors()):
        return True
    if output_type.isSubtypeOf(_C.ListType(_C.OptionalType.ofTensor())):
        return True


# ONNX can't handle constants that are lists of tensors, which can
# get generated in constant prop. So we split them back into prim::ListConstructs


@_beartype.beartype
def _split_tensor_list_constants(g, block):
    for node in block.nodes():
        for subblock in node.blocks():
            _split_tensor_list_constants(g, subblock)
        if _is_constant_tensor_list(node):
            inputs = []
            for val in node.output().toIValue():
                input = g.insertConstant(val)
                input.node().moveBefore(node)
                input.node().copyMetadata(node)
                inputs.append(input)

            lc = (
                g.create("prim::ListConstruct", inputs)
                .insertBefore(node)
                .output()
                .setType(_C.ListType.ofTensors())
            )
            lc.node().copyMetadata(node)
            node.output().replaceAllUsesWith(lc)


@_beartype.beartype
def _optimize_graph(
    graph: _C.Graph,
    operator_export_type: _C_onnx.OperatorExportTypes,
    _disable_torch_constant_prop: bool = False,
    fixed_batch_size: bool = False,
    params_dict=None,
    dynamic_axes=None,
    input_names=None,
    module=None,
):
    if params_dict is None:
        params_dict = {}

    # Inline everything
    _C._jit_pass_inline(graph)

    # Remove fork/wait nodes
    _C._jit_pass_inline_fork_wait(graph)
    _C._jit_pass_lint(graph)
    if GLOBALS.autograd_inlining:
        _C._jit_pass_onnx_autograd_function_process(graph)
    _C._jit_pass_lower_all_tuples(graph)

    # we now record some ops like ones/zeros
    # into a trace where we previously recorded constants.
    # use constant prop to maintain our current level of onnx support
    # without implementing symbolics for all of them
    if _disable_torch_constant_prop is False:
        _C._jit_pass_constant_propagation(graph)

    _split_tensor_list_constants(graph, graph)
    # run dce to eliminate dead parts of the graph that might have been
    # left behind by things like symbolic_override
    _C._jit_pass_dce(graph)
    _C._jit_pass_lint(graph)

    # CSE should improve perf when Autocast is used with disabled cache
    # Autocast is disabled due to a limitation on tracer as described at https://github.com/pytorch/pytorch/issues/84092
    # Must run before _C._jit_pass_erase_number_types to prevent type substitution
    if _C._jit_pass_cse(graph):
        _C._jit_pass_onnx_lint(graph)

    _C._jit_pass_canonicalize_graph_fuser_ops(graph)
    _C._jit_pass_lint(graph)
    _C._jit_pass_peephole(graph, True)
    _C._jit_pass_fuse_addmm(graph)
    _C._jit_pass_lint(graph)

    _C._jit_pass_peephole(graph, True)
    _C._jit_pass_lower_all_tuples(graph)
    # in _jit_pass_onnx, symbolic functions are called for each node for conversion.
    # However, there are nodes that cannot be converted without additional context.
    # For example, the number of outputs from split (and whether it is static or dynamic) is unknown
    # until the point where it is unpacked by listUnpack node.
    # This pass does a preprocess, and prepares the nodes such that enough context can be received
    # by the symbolic function.
    _C._jit_pass_onnx_remove_inplace_ops_for_onnx(graph, module)
    _C._jit_pass_onnx_preprocess(graph)

    # onnx does not support tuples, so try to remove them
    _C._jit_pass_lint(graph)

    # onnx only supports tensors, but 1 / 2 = 0.5 and tensor(1) / tensor(2) = 0
    _C._jit_pass_prepare_division_for_onnx(graph)

    _C._jit_pass_onnx_remove_print(graph)
    _C._jit_pass_onnx_preprocess_caffe2(graph)

    symbolic_helper._quantized_ops.clear()
    # Unpack quantized weights for conv and linear ops and insert into graph.
    _C._jit_pass_onnx_unpack_quantized_weights(
        graph, params_dict, symbolic_helper.is_caffe2_aten_fallback()
    )
    if symbolic_helper.is_caffe2_aten_fallback():
        # Insert permutes before and after each conv op to ensure correct order.
        _C._jit_pass_onnx_quantization_insert_permutes(graph, params_dict)

        # Find consecutive permutes that are no-ops and remove them.
        _C._jit_pass_custom_pattern_based_rewrite_graph(
            textwrap.dedent(
                """\
                graph(%Pi):
                    %Pq = quantized::nhwc2nchw(%Pi)
                    %Pr = quantized::nchw2nhwc(%Pq)
                    return (%Pr)"""
            ),
            textwrap.dedent(
                """\
                graph(%Ri):
                    return (%Ri)"""
            ),
            graph,
        )

    # onnx only supports tensors, so we turn all out number types into tensors
    _C._jit_pass_erase_number_types(graph)
    if GLOBALS.onnx_shape_inference:
        input_names = [] if input_names is None else input_names
        dynamic_axes = {} if dynamic_axes is None else dynamic_axes
        _C._jit_pass_onnx_set_dynamic_input_shape(graph, dynamic_axes, input_names)
    _C._jit_pass_onnx_lint(graph)

    graph = _C._jit_pass_onnx(graph, operator_export_type)
    _C._jit_pass_onnx_lint(graph)
    _C._jit_pass_lint(graph)

    _C._jit_pass_onnx_scalar_type_analysis(
        graph, True, GLOBALS.export_onnx_opset_version
    )
    _C._jit_pass_lint(graph)

    _C._jit_pass_onnx_peephole(
        graph, GLOBALS.export_onnx_opset_version, fixed_batch_size
    )
    _C._jit_pass_lint(graph)

    # graph is not a valid jit graph anymore because types have been replaced
    # (e.g. int with Tensor), so it now contains operators that don't actually
    # exist. We can't run normal dead code elimination because it'd fail trying
    # to look up if an operator has side effects, but we can run a dead code
    # elimination variant that doesn't need to look up if an op has side effects.
    _C._jit_pass_dce_allow_deleting_nodes_with_side_effects(graph)
    _C._jit_pass_lint(graph)
    graph = _C._jit_pass_canonicalize(graph)
    _C._jit_pass_lint(graph)
    if GLOBALS.onnx_shape_inference:
        try:
            _C._jit_pass_onnx_graph_shape_type_inference(
                graph, params_dict, GLOBALS.export_onnx_opset_version
            )
        except RuntimeError as exc:
            if (
                _C_onnx._CAFFE2_ATEN_FALLBACK
                and exc.args[0]
                == "ScalarType UNKNOWN_SCALAR is an unexpected tensor scalar type!"
            ):
                # Caffe2 builds can have UNKNOWN_SCALAR for some tensors
                pass

    return graph


@_beartype.beartype
def warn_on_static_input_change(input_states):
    """Warns that changes to input dictionaries and strings won't take effect in the traced ONNX graph.

    We accept dictionaries and strings as ONNX inputs, but they should be only for
    configuration use. We detect here if these inputs are modified, and if so we warn
    the user that the changes won't take effect in the traced ONNX graph.
    """
    for input, traced_input in zip(input_states[0], input_states[1]):
        if isinstance(input, dict):
            if list(input.keys()) != list(traced_input.keys()):
                warning = (
                    "We detected that you are modifying a dictionary that is an input to your "
                    "model. "
                    "Note that dictionaries are allowed as inputs in ONNX but they should be "
                    "handled with care. "
                    "Usages of dictionaries is not recommended, and should not be used except "
                    "for configuration use. "
                    "Also note that the order and values of the keys must remain the same. "
                )
                warnings.warn(warning)
        elif isinstance(input, str):
            if input != traced_input:
                warning = (
                    "The model seems to have string inputs/outputs. "
                    "Note that strings will not appear as inputs/outputs of the ONNX graph. "
                )
                warnings.warn(warning)


@_beartype.beartype
def _resolve_args_by_export_type(arg_name, arg_value, operator_export_type):
    """Resolves the arguments that are ignored when export_type != operator_export_type.ONNX."""
    if (
        operator_export_type is not operator_export_type.ONNX
        and _C_onnx._CAFFE2_ATEN_FALLBACK
    ):
        if arg_value is True:
            warnings.warn(
                f"'{arg_name}' can be set to True only when 'operator_export_type' is "
                "`ONNX`. Since 'operator_export_type' is not set to 'ONNX', "
                f"'{arg_name}' argument will be ignored."
            )
        arg_value = False
    return arg_value


@_beartype.beartype
def _decide_keep_init_as_input(
    keep_initializers_as_inputs: Optional[bool],
    operator_export_type: _C_onnx.OperatorExportTypes,
    opset_version: int,
):
    """Decides whether the initializers in the graph should be listed as ONNX graph inputs.

    This method encapsulates the logic to decide whether the initializers in the graph
    should be listed as ONNX graph inputs (i.e., whether to choose ONNX IR v3 or v4).
    If keep_initializers_as_inputs is not specified (None), then we decide whether to keep
    initializers as graph inputs (val_keep_init_as_ip) based on export type. If export type
    is ONNX, then do not keep initializers as input (val_keep_init_as_ip=False). For all other
    export types keep initializers as input (val_keep_init_as_ip=True).
    If keep_initializers_as_inputs is specified, then respect it. Unless opset version <= 8,
    in which case it must be ignored because for opset version <= 8, all initializers MUST be
    part of graph input (only ONNX IR v3 is allowed), i.e. val_keep_init_as_ip=True.

    Special handling is needed for opset version 8 or lower, because irrespective
    of user input for keep_initializers_as_inputs, the graph must follow ONNX IR v3
    semantics, i.e. all initializers must be listed as ONNX graph input.
    """

    if opset_version < 9:
        if keep_initializers_as_inputs is False:
            warnings.warn(
                "Setting 'keep_initializers_as_inputs=False' for opset version"
                "8 or lower would lead to an invalid ONNX graph. Therefore, "
                "'keep_initializers_as_inputs=False' is ignored during export."
                "Exported model will have initializers as graph inputs (compliant "
                " to ONNX IR v3)."
            )
        return True  # i.e. True == initializers are part of graph input (ONNX IR v3)
    val_keep_init_as_ip = (
        True if keep_initializers_as_inputs is None else keep_initializers_as_inputs
    )
    if (
        keep_initializers_as_inputs is None
        and operator_export_type is _C_onnx.OperatorExportTypes.ONNX
    ):
        val_keep_init_as_ip = False
    return val_keep_init_as_ip


@_beartype.beartype
def _decide_add_node_names(add_node_names, operator_export_type):
    return _resolve_args_by_export_type(
        "add_node_names", add_node_names, operator_export_type
    )


@_beartype.beartype
def _decide_constant_folding(do_constant_folding, operator_export_type, training):
    do_constant_folding = _resolve_args_by_export_type(
        "do_constant_folding", do_constant_folding, operator_export_type
    )
    if do_constant_folding and (
        training is not None and training is not _C_onnx.TrainingMode.EVAL
    ):
        warnings.warn(
            "It is recommended that constant folding be turned off ('do_constant_folding=False') "
            "when exporting the model in training-amenable mode, i.e. with 'training=TrainingMode.TRAIN' "
            "or 'training=TrainingMode.PRESERVE' (when model is in training mode). Otherwise, some "
            "learnable model parameters may not translate correctly in the exported ONNX model "
            "because constant folding mutates model parameters. Please consider "
            "turning off constant folding or setting the training=TrainingMode.EVAL."
        )
    return do_constant_folding


@_beartype.beartype
def _signature(model) -> inspect.Signature:
    should_be_callable = getattr(model, "forward", model)
    if callable(should_be_callable):
        return inspect.signature(should_be_callable)
    raise ValueError("model has no forward method and is not callable")


@_beartype.beartype
def _decide_input_format(model, args):
    try:
        sig = _signature(model)
    except ValueError as e:
        warnings.warn(f"{e}, skipping _decide_input_format")
        return args
    try:
        ordered_list_keys = list(sig.parameters.keys())
        if ordered_list_keys[0] == "self":
            ordered_list_keys = ordered_list_keys[1:]
        args_dict: Dict = {}
        if isinstance(args, list):
            args_list = args
        elif isinstance(args, tuple):
            args_list = list(args)
        else:
            args_list = [args]
        if isinstance(args_list[-1], dict):
            args_dict = args_list[-1]
            args_list = args_list[:-1]
        n_nonkeyword = len(args_list)
        for optional_arg in ordered_list_keys[n_nonkeyword:]:
            if optional_arg in args_dict:
                args_list.append(args_dict[optional_arg])
            # Check if this arg has a default value
            else:
                param = sig.parameters[optional_arg]
                if param.default != param.empty:
                    args_list.append(param.default)
        args = args_list if isinstance(args, list) else tuple(args_list)
    # Cases of models with no input args
    except IndexError:
        warnings.warn("No input args, skipping _decide_input_format")
    except Exception as e:
        warnings.warn(f"Skipping _decide_input_format\n {e.args[0]}")
    return args


@_beartype.beartype
def _trace(func, args, operator_export_type, return_outs=False):
    # Special case for common case of passing a single Tensor
    if isinstance(args, torch.Tensor):
        args = (args,)

    trace_graph, torch_out, inputs_states = torch.jit._get_trace_graph(
        func,
        args,
        strict=False,
        _force_outplace=False,
        _return_inputs_states=True,
    )
    warn_on_static_input_change(inputs_states)

    trace_graph = _optimize_graph(trace_graph, operator_export_type, params_dict={})
    if return_outs:
        return trace_graph, torch_out
    return trace_graph


@_beartype.beartype
def _trace_and_get_graph_from_model(model, args):
    # A basic sanity check: make sure the state_dict keys are the same
    # before and after running the model.  Fail fast!
    orig_state_dict_keys = torch.jit._unique_state_dict(model).keys()

    # Disable Autocast cache because it replaces kernel's weight and bias
    # by (undesired) constants.
    # No perf impact for when there are reused weights since https://github.com/pytorch/pytorch/pull/85665
    prev_autocast_cache_enabled = torch.is_autocast_cache_enabled()
    torch.set_autocast_cache_enabled(False)
    trace_graph, torch_out, inputs_states = torch.jit._get_trace_graph(
        model,
        args,
        strict=False,
        _force_outplace=False,
        _return_inputs_states=True,
    )
    torch.set_autocast_cache_enabled(prev_autocast_cache_enabled)

    warn_on_static_input_change(inputs_states)

    if orig_state_dict_keys != torch.jit._unique_state_dict(model).keys():
        raise RuntimeError(
            "state_dict changed after running the tracer; "
            "something weird is happening in your model!"
        )

    return trace_graph, torch_out


@_beartype.beartype
def _get_param_count_list(method_graph, args_params):
    param_count_list = []
    for input_, arg_params_ in zip(method_graph.inputs(), args_params):
        if "PackedParams" in str(input_.type()):
            in_vars, _ = torch.jit._flatten(arg_params_)
            param_count_list.append(len(in_vars))
        else:
            param_count_list.append(arg_params_ is not None)

    return param_count_list


@_beartype.beartype
def _check_flatten_did_not_remove(original, jit_flattened):
    """torch.jit._flatten removes None. Check if it did so in this case."""

    @_beartype.beartype
    def flatten(x):
        if isinstance(x, (list, tuple)):
            for inner in x:
                yield from flatten(inner)
        elif isinstance(x, dict):
            for inner in x.values():
                yield from flatten(inner)
        else:
            yield x

    flattened_with_none = list(flatten(original))
    num_none = len(flattened_with_none) - len(jit_flattened)
    assert num_none >= 0
    if num_none:
        raise ValueError(
            f"args contained {num_none} None's after flattening. "
            "When exporting a ScriptModule or ScriptFunction, no args may "
            "be None because that breaks type propagation."
        )


def _create_jit_graph(
    model: Union[torch.nn.Module, torch.jit.ScriptFunction], args: Sequence[Any]
) -> Tuple[_C.Graph, List[_C.IValue], Optional[Any], Optional[_C.ScriptModule]]:
    if isinstance(model, (torch.jit.ScriptFunction, torch.jit.ScriptModule)):
        flattened_args = tuple(torch.jit._flatten(tuple(args))[0])
        _check_flatten_did_not_remove(args, flattened_args)
        torch_out = None

        if isinstance(model, torch.jit.ScriptModule):
            try:
                graph = model.forward.graph  # type: ignore[attr-defined]
            except AttributeError as e:
                raise RuntimeError("'forward' method must be a script method") from e
            _C._jit_pass_onnx_function_substitution(graph)
            freezed_module = _C._freeze_module(
                cast(_C.ScriptModule, model._c), preserveParameters=True
            )
            module, params = _C._jit_onnx_list_model_parameters(freezed_module)
            method_graph = module._get_method("forward").graph
            args_params = tuple(args) + tuple(params)
            param_count_list = _get_param_count_list(method_graph, args_params)
            in_vars, _ = torch.jit._flatten(args_params)
            graph = _C._propagate_and_assign_input_shapes(
                method_graph, tuple(in_vars), param_count_list, False, False
            )
            return graph, params, torch_out, module

        # torch.jit.ScriptFunction
        params = []
        graph = model.graph
        _C._jit_pass_onnx_function_substitution(graph)
        param_count_list = _get_param_count_list(graph, args)
        graph = _C._propagate_and_assign_input_shapes(
            graph, flattened_args, param_count_list, False, False
        )
        return graph, params, torch_out, None

    graph, torch_out = _trace_and_get_graph_from_model(model, args)
    _C._jit_pass_onnx_lint(graph)
    state_dict = torch.jit._unique_state_dict(model)
    params = list(state_dict.values())
    graph_inputs = list(graph.inputs())
    user_input_num = len(graph_inputs) - len(state_dict)
    param_names = list(state_dict.keys())
    for i, inp in enumerate(graph_inputs):
        if i >= user_input_num:
            inp.setDebugName(param_names[i - user_input_num])
    _C._jit_pass_onnx_function_substitution(graph)
    return graph, params, torch_out, None


@_beartype.beartype
def _get_named_param_dict(graph, params):
    input_and_param_names = [val.debugName() for val in graph.inputs()]
    param_names = input_and_param_names[len(input_and_param_names) - len(params) :]
    _params_dict = dict(zip(param_names, params))
    return _params_dict


@_beartype.beartype
def _get_example_outputs(model, args):
    input_args = copy.deepcopy(args)
    input_kwargs = {}
    if input_args and isinstance(input_args[-1], dict):
        input_kwargs = input_args[-1]
        input_args = input_args[:-1]

    example_outputs = model(*input_args, **input_kwargs)
    if isinstance(example_outputs, list):
        example_outputs = [example_outputs]
    elif not isinstance(example_outputs, tuple):
        example_outputs = (example_outputs,)

    return example_outputs


_qtype_vtype_map = {
    torch.quint8: torch.uint8,
    torch.qint8: torch.int8,
    torch.qint32: torch.int32,
    torch.quint4x2: torch.int8,
}


@_beartype.beartype
def unpack_quantized_tensor(value, cast_onnx_accepted=True):
    if isinstance(value, torch.Tensor) and value.dtype in _qtype_vtype_map:
        q_value_dequantize = value.dequantize()
        q_scale = (
            torch.tensor(value.q_scale(), dtype=torch.double)
            if cast_onnx_accepted
            else torch.tensor(value.q_scale(), dtype=torch.float32)
        )
        q_zero_point = (
            torch.tensor(value.q_zero_point(), dtype=torch.int64)
            if cast_onnx_accepted
            else torch.tensor(value.q_zero_point(), dtype=_qtype_vtype_map[value.dtype])
        )
        q_value = q_value_dequantize / q_scale + q_zero_point
        q_value = q_value.to(dtype=_qtype_vtype_map[value.dtype])
        return q_value, q_scale, q_zero_point
    else:
        return (value,)


@_beartype.beartype
def _pre_trace_quant_model(model, args):
    r"""Returns `torch.jit.trace(model, args)` if model is quantized. Otherwise do nothing and return
    original model.

    This is due to https://github.com/pytorch/pytorch/issues/75761.
    """
    if any(
        hasattr(m, "_packed_params") for m in getattr(model, "modules", list)()
    ) or any(getattr(arg, "is_quantized", False) for arg in args):
        return torch.jit.trace(model, args)
    return model


@_beartype.beartype
def _model_to_graph(
    model,
    args,
    verbose=False,
    input_names=None,
    output_names=None,
    operator_export_type=_C_onnx.OperatorExportTypes.ONNX,
    do_constant_folding=True,
    _disable_torch_constant_prop=False,
    fixed_batch_size=False,
    training=_C_onnx.TrainingMode.EVAL,
    dynamic_axes=None,
) -> Tuple[
    _C.Graph,
    Dict[str, torch.Tensor],
    Optional[
        Union[
            torch.Tensor,
            Tuple[torch.Tensor, ...],
            List[torch.Tensor],
            Dict[str, torch.Tensor],
            Any,  # Can be nested tuples etc.
        ]
    ],
]:
    """Converts model into an ONNX graph.

    Returns:
        graph: A TorchScript IR Graph with ONNX nodes.
        params_dict: Dict from input param name to param value.
        torch_out: The output tensors resulting from the trace of ``model``.
            If ``model`` is a :class:`torch.jit.ScriptModule` or :class:`torch.jit.ScriptFunction`,
            this will be None, since we are not doing any tracing.
    """
    # TODO: can we simplify this to always return a tuple of Tensor or None?

    # Special case for common case of passing a single Tensor
    if isinstance(args, (torch.Tensor, int, float, bool)):
        args = (args,)

    model = _pre_trace_quant_model(model, args)
    graph, params, torch_out, module = _create_jit_graph(model, args)
    params_dict = _get_named_param_dict(graph, params)

    try:
        graph = _optimize_graph(
            graph,
            operator_export_type,
            _disable_torch_constant_prop=_disable_torch_constant_prop,
            fixed_batch_size=fixed_batch_size,
            params_dict=params_dict,
            dynamic_axes=dynamic_axes,
            input_names=input_names,
            module=module,
        )
    except Exception as e:
        torch.onnx.log("Torch IR graph at exception: ", graph)
        raise

    is_script = isinstance(model, (torch.jit.ScriptFunction, torch.jit.ScriptModule))
    if is_script:
        example_outputs = _get_example_outputs(model, args)
        example_outputs_final = ()
        for example_output in example_outputs:
            example_outputs_final += unpack_quantized_tensor(example_output)
        out_vars, desc = torch.jit._flatten(example_outputs_final)
        _C._jit_pass_onnx_assign_output_shape(
            graph,
            out_vars,
            desc,
            GLOBALS.onnx_shape_inference,
            is_script,
            GLOBALS.export_onnx_opset_version,
        )

    # NB: ONNX requires complete information about output types, which might be
    # erased by some optimizations, so we need to set it explicitly again.
    else:
        if not isinstance(torch_out, (list, tuple)):
            output_wrapped = [torch_out]
        else:
            output_wrapped = torch_out  # type: ignore[assignment]

        output_tensors, out_desc = torch.jit._flatten(tuple(output_wrapped))
        # assign_output_shape pass is not compatible with quantized outputs.
        # Quantized outputs are flattened to 3 values in ONNX, while packed as
        # single value in PyTorch.
        if not any(getattr(out, "is_quantized", False) for out in output_tensors):
            _C._jit_pass_onnx_assign_output_shape(
                graph,
                output_tensors,
                out_desc,
                GLOBALS.onnx_shape_inference,
                is_script,
                GLOBALS.export_onnx_opset_version,
            )

    _set_input_and_output_names(graph, input_names, output_names)
    params_dict = _get_named_param_dict(graph, params)

    if (
        do_constant_folding
        and GLOBALS.export_onnx_opset_version
        >= _constants.ONNX_CONSTANT_FOLDING_MIN_OPSET
    ):
        if training is None or training == _C_onnx.TrainingMode.EVAL:
            params_dict = _C._jit_pass_onnx_eval_peephole(graph, params_dict)

        params_dict = _C._jit_pass_onnx_constant_fold(
            graph, params_dict, GLOBALS.export_onnx_opset_version
        )
        _C._jit_pass_dce_allow_deleting_nodes_with_side_effects(graph)

    if GLOBALS.onnx_shape_inference:
        try:
            _C._jit_pass_onnx_graph_shape_type_inference(
                graph, params_dict, GLOBALS.export_onnx_opset_version
            )
        except RuntimeError as exc:
            if (
                _C_onnx._CAFFE2_ATEN_FALLBACK
                and exc.args[0]
                == "ScalarType UNKNOWN_SCALAR is an unexpected tensor scalar type!"
            ):
                # Caffe2 builds can have UNKNOWN_SCALAR for some tensors
                pass

    params_dict = _C._jit_pass_onnx_eliminate_unused_items(graph, params_dict)

    # For ONNX opset < 9, constants only have three data types: float16, float, double.
    # In this pass transform constants of other data types to float/double + cast operator.
    if GLOBALS.export_onnx_opset_version < 9:
        _C._jit_pass_onnx_cast_all_constant_to_floating(graph)

    params_dict = _C._jit_pass_filter_non_tensor_arguments(params_dict)
    _C._jit_decay_packed_param_input_types(graph)

    # If output names lack a proper name and are identified only by their
    # unique ids, give them a legible name for debugging purposes.
    _apply_friendly_debug_names(graph, params_dict)

    return graph, params_dict, torch_out


@_beartype.beartype
@torch._disable_dynamo
def export_to_pretty_string(
    model,
    args,
    export_params=True,
    verbose=False,
    training=_C_onnx.TrainingMode.EVAL,
    input_names=None,
    output_names=None,
    operator_export_type=_C_onnx.OperatorExportTypes.ONNX,
    export_type=None,
    google_printer=False,
    opset_version=None,
    keep_initializers_as_inputs=None,
    custom_opsets=None,
    add_node_names=True,
    do_constant_folding=True,
    dynamic_axes=None,
):
    r"""
    Similar to :func:`export`, but returns a text representation of the ONNX
    model. Only differences in args listed below. All other args are the same
    as :func:`export`.

    Args:
        add_node_names (bool, default True): Whether or not to set
            NodeProto.name. This makes no difference unless
            ``google_printer=True``.
        google_printer (bool, default False): If False, will return a custom,
            compact representation of the model. If True will return the
            protobuf's `Message::DebugString()`, which is more verbose.

    Returns:
        A UTF-8 str containing a human-readable representation of the ONNX model.
    """
    if opset_version is None:
        opset_version = _constants.ONNX_DEFAULT_OPSET
    if custom_opsets is None:
        custom_opsets = {}
    GLOBALS.export_onnx_opset_version = opset_version
    GLOBALS.operator_export_type = operator_export_type

    with exporter_context(model, training, verbose):
        val_keep_init_as_ip = _decide_keep_init_as_input(
            keep_initializers_as_inputs, operator_export_type, opset_version
        )
        val_add_node_names = _decide_add_node_names(
            add_node_names, operator_export_type
        )
        val_do_constant_folding = _decide_constant_folding(
            do_constant_folding, operator_export_type, training
        )
        args = _decide_input_format(model, args)
        graph, params_dict, torch_out = _model_to_graph(
            model,
            args,
            verbose,
            input_names,
            output_names,
            operator_export_type,
            val_do_constant_folding,
            training=training,
            dynamic_axes=dynamic_axes,
        )

        return graph._pretty_print_onnx(  # type: ignore[attr-defined]
            params_dict,
            opset_version,
            False,
            operator_export_type,
            google_printer,
            val_keep_init_as_ip,
            custom_opsets,
            val_add_node_names,
        )
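
# A minimal usage sketch for export_to_pretty_string (illustrative only; the
# tiny model below is a hypothetical stand-in and is not used elsewhere).
def _example_export_to_pretty_string():
    class _TinyModel(torch.nn.Module):
        def forward(self, x):
            return torch.relu(x)

    # Returns a human-readable text dump of the exported ONNX model instead of
    # writing a protobuf file.
    return export_to_pretty_string(_TinyModel(), (torch.randn(1, 4),))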


@_beartype.beartype
def unconvertible_ops(
    model,
    args,
    training: _C_onnx.TrainingMode = _C_onnx.TrainingMode.EVAL,
    opset_version: Optional[int] = None,
) -> Tuple[_C.Graph, List[str]]:
    """Returns an approximated list of all ops that are not yet supported by :mod:`torch.onnx`.

    The list is approximated because some ops may be removed during the conversion
    process and don't need to be converted. Some other ops may have partial support
    that will fail conversion with particular inputs. Please open a Github Issue
    for op support requests.

    Args:
        model: Same as the `model` parameter in :func:`torch.onnx.export`.
        args: Same as the `args` parameter in :func:`torch.onnx.export`.
        training: Same as the `training` parameter in :func:`torch.onnx.export`.
        opset_version: Same as the `opset_version` parameter in :func:`torch.onnx.export`.

    Returns:
        The JIT graph and a list of unconvertible ops in the format of "domain::op".
    """

    opset_version = opset_version or _constants.ONNX_DEFAULT_OPSET
    GLOBALS.export_onnx_opset_version = opset_version

    try:
        with exporter_context(model, training, verbose=False):
            # Create a mostly clean JIT graph that contains the plain aten and
            # other ops we can check with the symbolic registry.
            # NOTE: We don't want to actually convert any ops to ONNX or run any
            # symbolic functions because there is a higher chance that a pass
            # fails or an unconvertible op messes up the graph during ONNX conversion.
            # This way we can always generate a list just by looking at the names
            # of the ops in the graph.
            args = _decide_input_format(model, args)
            model = _pre_trace_quant_model(model, args)
            graph, _, _, module = _create_jit_graph(model, args)
            _C._jit_pass_inline(graph)
            _C._jit_pass_onnx_remove_inplace_ops_for_onnx(graph, module)
            _C._jit_pass_erase_number_types(graph)
            _C._jit_pass_dce_allow_deleting_nodes_with_side_effects(graph)
    except Exception as e:
        raise errors.OnnxExporterError(
            "Failed to discover unconvertible ops because of errors during the JIT graph "
            "generation process."
        ) from e

    unsupported_ops = []
    for node in graph.nodes():
        domain_op = node.kind()
        if domain_op.startswith(("onnx::", "prim::")):
            # We consider onnx and prim ops as supported ops, even though some "prim"
            # ops are not implemented as symbolic functions, because they may be
            # eliminated in the conversion passes. Users may still see errors caused
            # by prim ops even though they don't show up in the list.
            continue
        if not registration.registry.is_registered_op(
            domain_op.rstrip("_"), opset_version
        ):
            # We consider all registered ops supported, even though some of them are
            # only partially supported, because there is not yet a good way to check
            # if an op is fully supported.
            # TODO(justinchuby): Create a way to check if an op is fully supported.
            unsupported_ops.append(domain_op)
    return graph, unsupported_ops
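
# A minimal usage sketch for unconvertible_ops (illustrative only; the model,
# input shape and opset below are hypothetical).
def _example_unconvertible_ops():
    model = torch.nn.Sequential(torch.nn.Linear(8, 8), torch.nn.ReLU()).eval()
    dummy = torch.randn(1, 8)
    _graph, unsupported = unconvertible_ops(model, (dummy,), opset_version=17)
    # Entries are reported as "domain::op", e.g. "aten::some_op", when no
    # symbolic function is registered for the chosen opset.
    return unsupported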


@_beartype.beartype
def _setup_trace_module_map(
    model: Union[torch.nn.Module, torch.jit.ScriptModule],
    export_modules_as_functions: Union[bool, Collection[Type[torch.nn.Module]]],
) -> Set[str]:
    def __register_attribute_hook():
        attr_name = "_onnx_attrs"

        def _track_module_attributes_forward_pre_hook(module, input):
            setattr(module, attr_name, _get_module_attributes(module))

        def _track_module_attributes_forward_hook(module, input, output):
            tracing_state = _C._get_tracing_state()
            if not tracing_state:
                return

            graph = tracing_state.graph()
            onnx_attrs = {}
            if hasattr(module, attr_name):
                onnx_attrs = getattr(module, attr_name)
                delattr(module, attr_name)

            _C._jit_pass_onnx_track_scope_attributes(graph, onnx_attrs)

        for m in model.modules():
            m.register_forward_hook(_track_module_attributes_forward_hook)
            m.register_forward_pre_hook(_track_module_attributes_forward_pre_hook)

    def _unqualified_variable_name(qualified_name: str) -> str:
        """
        Parse qualified variable name and return the unqualified version.

        Pure numeric atoms are considered inadequate, so this function will look past them,
        and start from the first non-numeric atom.

        Example:
            >>> _unqualified_variable_name('__main__.Foo.bar')
            'bar'
            >>> _unqualified_variable_name('__main__.Foo.bar.0')
            'bar.0'
        """
        name_atoms = qualified_name.split(".")
        for i, atom in reversed(list(enumerate(name_atoms))):
            if not atom.isnumeric():
                return ".".join(name_atoms[i:])
        return qualified_name

    trace_module_map = {
        _m: torch._C._jit_onnx_create_full_scope_name(
            torch.typename(type(_m)), _unqualified_variable_name(_n)
        )
        for _n, _m in model.named_modules()
    }
    torch.jit._trace._trace_module_map = trace_module_map
    if isinstance(export_modules_as_functions, bool) and export_modules_as_functions:
        module_typenames = {torch.typename(type(module)) for module in trace_module_map}
    elif isinstance(export_modules_as_functions, set) and export_modules_as_functions:

        def _find_typename(v):
            if isinstance(v, type):
                return torch.typename(v)
            else:
                raise RuntimeError(
                    "Only type of the `nn.Module` should be "
                    "passed in the set for argument `export_modules_as_functions`. "
                    f"Got `{type(v).__name__}`."
                )

        module_typenames = {_find_typename(v) for v in export_modules_as_functions}
    else:
        module_typenames = set()

    if module_typenames:
        __register_attribute_hook()

    return module_typenames


@_beartype.beartype
def _reset_trace_module_map():
    torch.jit._trace._trace_module_map = None
    _C._jit_pass_onnx_clear_scope_records()


@_beartype.beartype
def _get_module_attributes(module):
    annotations = typing.get_type_hints(type(module))
    base_m_annotations = typing.get_type_hints(torch.nn.Module)
    [annotations.pop(k, None) for k in base_m_annotations]
    # Check whether module attributes can be accessed. Some classes
    # define attributes but don't provide access to them in their
    # constructor.
    #
    # For example, torch.nn.Embedding has the `freeze` variable and its
    # type specified in the class but the attribute is not created in the
    # constructor. In other words, there is no `self.freeze = <True | False>`
    # in the constructor.
    #
    # Reference: https://github.com/pytorch/pytorch/blob/92de1d322223fb5584e384971b32c46b93bc2f4b/torch/nn/modules/sparse.py#L120
    attrs = {}
    for k in annotations:
        try:
            attrs[k] = getattr(module, k)
        except AttributeError:
            torch.onnx.log(f"Skipping module attribute '{k}'")
            continue
    return attrs


@_beartype.beartype
def _export(
    model,
    args,
    f,
    export_params=True,
    verbose=False,
    training=_C_onnx.TrainingMode.EVAL,
    input_names=None,
    output_names=None,
    operator_export_type=_C_onnx.OperatorExportTypes.ONNX,
    export_type=None,
    opset_version=None,
    do_constant_folding=True,
    dynamic_axes=None,
    keep_initializers_as_inputs=None,
    fixed_batch_size=False,
    custom_opsets=None,
    add_node_names=True,
    onnx_shape_inference=True,
    export_modules_as_functions=False,
    autograd_inlining=True,
):
    assert GLOBALS.in_onnx_export is False

    if export_type is None:
        export_type = _exporter_states.ExportTypes.PROTOBUF_FILE

    # Discussed deprecation with Nikita Shulga and Sergii Dymchenko from Meta
    if _C_onnx._CAFFE2_ATEN_FALLBACK:
        warnings.warn(
            "Caffe2 ONNX exporter is deprecated in version 2.0 and will be "
            "removed in 2.2. Please use PyTorch 2.1 or older for this capability.",
            category=FutureWarning,
            stacklevel=2,
        )

    if isinstance(model, torch.nn.DataParallel):
        raise ValueError(
            "torch.nn.DataParallel is not supported by ONNX "
            "exporter. Please unwrap the model from torch.nn.DataParallel "
            "via its `.module` attribute, e.g. "
            "torch.onnx.export(model.module, ...)"
        )

    GLOBALS.onnx_shape_inference = onnx_shape_inference

    if opset_version is None:
        opset_version = _constants.ONNX_DEFAULT_OPSET

    # torch.onnx.export does not support opset versions >=18
    if opset_version > _constants.ONNX_TORCHSCRIPT_EXPORTER_MAX_OPSET:
        # We do not want to fail because we should still allow users to create
        # custom symbolic functions for opset>17
        warnings.warn(
            f"Exporting to ONNX opset version {opset_version} is not supported "
            f"by 'torch.onnx.export()'. "
            f"The highest opset version supported is {_constants.ONNX_TORCHSCRIPT_EXPORTER_MAX_OPSET}. "
            f"To use a newer opset version, consider 'torch.onnx.dynamo_export()'. "
            f"Note that dynamo_export() is in preview. Please report errors with "
            f"dynamo_export() as Github issues to https://github.com/pytorch/pytorch/issues.",
            category=errors.OnnxExporterWarning,
        )

    if export_modules_as_functions and opset_version < 15:
        raise ValueError(
            "`export_modules_as_functions` is not supported for `opset_version` < 15. "
            "This is because `opset_version` < 15 implies IR version < 8, which means "
            "no local function support. "
        )
    if not operator_export_type:
        if _C_onnx._CAFFE2_ATEN_FALLBACK:
            operator_export_type = _C_onnx.OperatorExportTypes.ONNX_ATEN_FALLBACK
        else:
            operator_export_type = _C_onnx.OperatorExportTypes.ONNX

    # By default, training=TrainingMode.EVAL,
    # which is good because running a model in training mode could result in
    # internal buffers getting updated, dropout getting applied, etc.
    # If you really know what you're doing, you can set
    # training=TrainingMode.TRAINING or training=TrainingMode.PRESERVE
    # (to preserve whatever the original training mode was).
    GLOBALS.export_onnx_opset_version = opset_version
    GLOBALS.operator_export_type = operator_export_type

    try:
        GLOBALS.in_onnx_export = True
        _autograd_inlining_previous = GLOBALS.autograd_inlining
        GLOBALS.autograd_inlining = autograd_inlining

        module_typenames_to_export_as_functions: Set[str] = set()
        if isinstance(model, (torch.nn.Module, torch.jit.ScriptModule)):
            module_typenames_to_export_as_functions = _setup_trace_module_map(
                model, export_modules_as_functions
            )

        with exporter_context(model, training, verbose):
            val_keep_init_as_ip = _decide_keep_init_as_input(
                keep_initializers_as_inputs,
                operator_export_type,
                opset_version,
            )
            val_add_node_names = _decide_add_node_names(
                add_node_names, operator_export_type
            )
            val_do_constant_folding = _decide_constant_folding(
                do_constant_folding, operator_export_type, training
            )
            # Normally f can be a file-like object, but for large models, the external data format requires a
            # valid `model_file_location`. Code in export.cpp will enforce this.
            if isinstance(f, str):
                model_file_location = f
            else:
                model_file_location = ""
            args = _decide_input_format(model, args)
            if dynamic_axes is None:
                dynamic_axes = {}
            _validate_dynamic_axes(dynamic_axes, model, input_names, output_names)

            graph, params_dict, torch_out = _model_to_graph(
                model,
                args,
                verbose,
                input_names,
                output_names,
                operator_export_type,
                val_do_constant_folding,
                fixed_batch_size=fixed_batch_size,
                training=training,
                dynamic_axes=dynamic_axes,
            )

            # TODO: Don't allocate an in-memory string for the protobuf
            defer_weight_export = (
                export_type is not _exporter_states.ExportTypes.PROTOBUF_FILE
            )
            if custom_opsets is None:
                custom_opsets = {}

            _C._jit_pass_dce_allow_deleting_nodes_with_side_effects(graph)
            node_attr_to_name = {}  # type: ignore[var-annotated]
            if module_typenames_to_export_as_functions:
                # NOTE: cannot call DCE after this pass. DCE will remove function definition nodes.
                node_attr_to_name = _C._jit_pass_onnx_function_extraction(
                    graph,
                    module_typenames_to_export_as_functions,
                    list(params_dict.keys()),
                )

            if keep_initializers_as_inputs is not True:
                params_dict = _C._jit_pass_onnx_deduplicate_initializers(  # type: ignore[assignment]
                    graph,
                    params_dict,
                    getattr(model, "training", False),  # type: ignore[arg-type]
                )
            _C._jit_pass_onnx_assign_scoped_names_for_node_and_value(graph)
            if export_params:
                (
                    proto,
                    export_map,
                    val_use_external_data_format,
                    node_names,
                ) = graph._export_onnx(  # type: ignore[attr-defined]
                    params_dict,
                    opset_version,
                    dynamic_axes,
                    defer_weight_export,
                    operator_export_type,
                    not verbose,
                    val_keep_init_as_ip,
                    custom_opsets,
                    val_add_node_names,
                    model_file_location,
                    node_attr_to_name,
                )
            else:
                (
                    proto,
                    export_map,
                    val_use_external_data_format,
                    node_names,
                ) = graph._export_onnx(  # type: ignore[attr-defined]
                    {},
                    opset_version,
                    dynamic_axes,
                    False,
                    operator_export_type,
                    not verbose,
                    val_keep_init_as_ip,
                    custom_opsets,
                    val_add_node_names,
                    model_file_location,
                    node_attr_to_name,
                )
            # insert function_proto into model_proto.
            proto = onnx_proto_utils._add_onnxscript_fn(
                proto,
                custom_opsets,
            )
            if verbose:
                torch.onnx.log("Exported graph: ", graph)
            onnx_proto_utils._export_file(proto, f, export_type, export_map)
            # The ONNX checker only works for ONNX graph. So if the operator_export_type is not ONNX,
            # we can skip this check.
            # If large model format export is enabled, proto will only contain data location instead of
            # raw data and _check_onnx_proto() will fail because it can only handle the raw ONNX proto
            # string in memory.
            if (operator_export_type is _C_onnx.OperatorExportTypes.ONNX) and (
                not val_use_external_data_format
            ):
                try:
                    _C._check_onnx_proto(proto)
                except RuntimeError as e:
                    raise errors.CheckerError(e) from e
    finally:
        assert GLOBALS.in_onnx_export
        GLOBALS.in_onnx_export = False
        GLOBALS.autograd_inlining = _autograd_inlining_previous
        _reset_trace_module_map()

    return torch_out, params_dict


@_beartype.beartype
def _apply_friendly_debug_names(graph, params):
    for n in graph.nodes():
        for v in n.inputs():
            old_name = v.debugName()
            if old_name != str(v.unique()):
                continue
            new_name = f"{n.kind()}_{v.unique()}"
            v.setDebugName(new_name)
            if old_name in params:
                params[new_name] = params.pop(old_name)


@_beartype.beartype
def _set_input_and_output_names(graph, input_names, output_names):
    @_beartype.beartype
    def set_names(node_list, name_list, descriptor):
        if name_list is None:
            return
        if len(name_list) > len(node_list):
            raise RuntimeError(
                "number of %s names provided (%d) exceeded number of %ss (%d)"
                % (descriptor, len(name_list), descriptor, len(node_list))
            )

        # Mark if the output node DebugName is set before.
        output_node_set = set()
        for i, (name, node) in enumerate(zip(name_list, node_list)):
            # Duplicated output node, insert onnx::Identity to avoid setting the same DebugName after setDebugName().
            if descriptor == "output":
                if node in output_node_set:
                    identity_node = graph.create("onnx::Identity")
                    identity_node.insertAfter(node.node())
                    identity_node.addInput(node)
                    identity_node.output().setType(node.type())
                    graph.return_node().replaceInput(i, identity_node.output())
                    node = identity_node.output()
                output_node_set.add(node)

            if node.debugName() != name:
                node.setDebugName(name)

    set_names(list(graph.inputs()), input_names, "input")
    set_names(list(graph.outputs()), output_names, "output")


@_beartype.beartype
def _run_symbolic_method(g, op_name, symbolic_fn, args):
    r"""
    This trampoline function gets invoked for every symbolic method
    call from C++.
    """
    try:
        graph_context = jit_utils.GraphContext(
            graph=g,
            block=g.block(),
            opset=GLOBALS.export_onnx_opset_version,
            original_node=None,  # type: ignore[arg-type]
            params_dict=_params_dict,
            env={},
            values_in_env=set(),
            new_nodes=[],
        )
        return symbolic_fn(graph_context, *args)
    except TypeError as e:
        # Handle the specific case where we didn't successfully dispatch
        # to symbolic_fn.  Otherwise, the backtrace will have the clues
        # you need.
        e.args = (f"{e.args[0]} (occurred when translating {op_name})",)
        raise


@_beartype.beartype
def _add_block(node: _C.Node) -> _C.Block:
    return node.addBlock()


@_beartype.beartype
def _add_input_to_block(block: _C.Block):
    return block.addInputToBlock()  # type: ignore[attr-defined]


@_beartype.beartype
def _add_output_to_block(block: _C.Block, value: _C.Value) -> int:
    return block.registerOutput(value)


@_beartype.beartype
def _should_aten_fallback(
    name: str, opset_version: int, operator_export_type: _C_onnx.OperatorExportTypes
):
    # For all builds, if domain=="aten" and operator_export_type==ONNX_ATEN,
    #   an aten::ATen operator is created regardless of symbolics existence

    is_exportable_aten_op = registration.registry.is_registered_op(name, opset_version)
    is_onnx_aten_export = operator_export_type == _C_onnx.OperatorExportTypes.ONNX_ATEN
    is_aten_fallback_export = (
        operator_export_type == _C_onnx.OperatorExportTypes.ONNX_ATEN_FALLBACK
    )
    is_caffe2_build = _C_onnx._CAFFE2_ATEN_FALLBACK

    if not name.startswith("aten::"):
        return False

    if is_caffe2_build:
        if (
            is_onnx_aten_export or is_aten_fallback_export
        ) and not is_exportable_aten_op:
            return True
    else:
        if is_onnx_aten_export or (
            is_aten_fallback_export and not is_exportable_aten_op
        ):
            return True

    return False


@_beartype.beartype
def _need_symbolic_context(symbolic_fn: Callable) -> bool:
    """Checks if the first argument to symbolic_fn is annotated as type `torch.onnx.SymbolicContext`."""
    params = tuple(inspect.signature(symbolic_fn).parameters.values())
    # When the annotation is postpone-evaluated, the annotation is a string
    # and not a type. We need to use get_type_hints to get the real type.
    if not params:
        return False
    first_param_name = params[0].name
    type_hints = typing.get_type_hints(symbolic_fn)
    if first_param_name not in type_hints:
        return False
    param_type = type_hints[first_param_name]
    return issubclass(param_type, _exporter_states.SymbolicContext)


@_beartype.beartype
def _symbolic_context_handler(symbolic_fn: Callable) -> Callable:
    """Decorator that provides the symbolic context to the symbolic function if needed."""
    if _need_symbolic_context(symbolic_fn):
        # TODO(justinchuby): Update the module name of GraphContext when it is public
        warnings.warn(
            "The first argument to symbolic functions is deprecated in 1.13 and will be "
            "removed in the future. Please treat the first argument (g) as a GraphContext "
            "and use context information from the object instead.",
            category=FutureWarning,
        )

        def wrapper(graph_context: jit_utils.GraphContext, *args, **kwargs):
            symbolic_context = _exporter_states.SymbolicContext(
                params_dict=graph_context.params_dict,
                env=graph_context.env,
                cur_node=graph_context.original_node,
                onnx_block=graph_context.block,
            )
            return symbolic_fn(symbolic_context, graph_context, *args, **kwargs)

        return wrapper
    return symbolic_fn


@_beartype.beartype
def _get_aten_op_overload_name(n: _C.Node) -> str:
    # Returns `overload_name` attribute to ATen ops on non-Caffe2 builds
    schema = n.schema()
    if not schema.startswith("aten::") or symbolic_helper.is_caffe2_aten_fallback():
        return ""
    return _C.parse_schema(schema).overload_name


@_beartype.beartype
def _run_symbolic_function(
    graph: _C.Graph,
    block: _C.Block,
    node: _C.Node,
    inputs: Any,
    env: Dict[_C.Value, _C.Value],
    values_in_env: Set[_C.Value],
    new_nodes: List[_C.Node],
    operator_export_type=_C_onnx.OperatorExportTypes.ONNX,
) -> Optional[Union[_C.Value, Sequence[Optional[_C.Value]]]]:
    """Runs a symbolic function.

    The function is used in C++ to export the node to ONNX.

    Returns:
        A single or a tuple of Values.
        None when the node gets cloned as is into the new graph.
    """

    opset_version = GLOBALS.export_onnx_opset_version

    # See Note [Export inplace]
    node_kind = node.kind()
    if node_kind.endswith("_"):
        # Treat relu_ -> relu; add_ -> add etc.
        ns_op_name = node_kind[:-1]
    else:
        ns_op_name = node_kind

    namespace, op_name = jit_utils.parse_node_kind(ns_op_name)

    graph_context = jit_utils.GraphContext(
        graph=graph,
        block=block,
        opset=opset_version,
        original_node=node,
        params_dict=_params_dict,
        env=env,
        values_in_env=values_in_env,
        new_nodes=new_nodes,
    )

    # Direct ATen export requested
    if _should_aten_fallback(ns_op_name, opset_version, operator_export_type):
        attrs = {
            k + "_" + node.kindOf(k)[0]: symbolic_helper._node_get(node, k)
            for k in node.attributeNames()
        }
        outputs = node.outputsSize()
        attrs["outputs"] = outputs
        return graph_context.aten_op(
            op_name,
            *inputs,
            overload_name=_get_aten_op_overload_name(node),
            **attrs,
        )

    try:
        # Caffe2-specific: Quantized op symbolics are registered for opset 9 only.
        if symbolic_helper.is_caffe2_aten_fallback() and opset_version == 9:
            symbolic_caffe2.register_quantized_ops("caffe2", opset_version)

        if namespace == "quantized" and symbolic_helper.is_caffe2_aten_fallback():
            domain = "caffe2"
        else:
            domain = namespace
        symbolic_function_name = f"{domain}::{op_name}"

        symbolic_function_group = registration.registry.get_function_group(
            symbolic_function_name
        )
        if symbolic_function_group is not None:
            symbolic_fn = symbolic_function_group.get(opset_version)
            if symbolic_fn is not None:
                # TODO Wrap almost identical attrs assignment or comment the difference.
                attrs = {
                    k: symbolic_helper._node_get(node, k) for k in node.attributeNames()
                }
                return symbolic_fn(graph_context, *inputs, **attrs)

        attrs = {
            k + "_" + node.kindOf(k)[0]: symbolic_helper._node_get(node, k)
            for k in node.attributeNames()
        }
        if namespace == "onnx":
            # Clone node to trigger ONNX shape inference
            return graph_context.op(
                op_name, *inputs, **attrs, outputs=node.outputsSize()
            )  # type: ignore[attr-defined]

        raise errors.UnsupportedOperatorError(
            symbolic_function_name,
            opset_version,
            symbolic_function_group.get_min_supported()
            if symbolic_function_group
            else None,
        )

    except RuntimeError:
        if operator_export_type == _C_onnx.OperatorExportTypes.ONNX_FALLTHROUGH:
            return None
        elif (
            operator_export_type == _C_onnx.OperatorExportTypes.ONNX_ATEN_FALLBACK
            and not symbolic_helper.is_caffe2_aten_fallback()
        ):
            # Emit ATen op for non-Caffe2 builds when `operator_export_type==ONNX_ATEN_FALLBACK`
            attrs = {
                k + "_" + node.kindOf(k)[0]: symbolic_helper._node_get(node, k)
                for k in node.attributeNames()
            }
            return graph_context.aten_op(
                op_name,
                *inputs,
                overload_name=_get_aten_op_overload_name(node),
                **attrs,
            )
        raise
    except TypeError as e:
        # Handle the specific case where we didn't successfully dispatch.
        # Otherwise, the backtrace will have the clues you need.
        e.args = (f"{e.args[0]} \n(Occurred when translating {op_name}).",)
        raise


@_beartype.beartype
def _verify_custom_op_name(symbolic_name: str):
    if not re.match(r"^[a-zA-Z0-9-_]+::[a-zA-Z-_]+[a-zA-Z0-9-_]*$", symbolic_name):
        raise errors.OnnxExporterError(
            f"Failed to register operator {symbolic_name}. "
            "The symbolic name must match the format domain::name, "
            "and should start with a letter and contain only "
            "alphanumerical characters"
        )

    ns, _ = jit_utils.parse_node_kind(symbolic_name)
    if ns == "onnx":
        raise ValueError(
            f"Failed to register operator {symbolic_name}. {ns} domain cannot be modified."
        )


@_beartype.beartype
def register_custom_op_symbolic(
    symbolic_name: str,
    symbolic_fn: Callable,
    opset_version: int,
):
    """Registers a symbolic function for a custom operator.

    When the user registers symbolic for custom/contrib ops,
    it is highly recommended to add shape inference for that operator via setType API,
    otherwise the exported graph may have incorrect shape inference in some extreme cases.
    An example of setType is `test_aten_embedding_2` in `test_operators.py`.

    See "Custom Operators" in the module documentation for an example usage.

    Args:
        symbolic_name (str): The name of the custom operator in "<domain>::<op>"
            format.
        symbolic_fn (Callable): A function that takes in the ONNX graph and
            the input arguments to the current operator, and returns new
            operator nodes to add to the graph.
        opset_version (int): The ONNX opset version in which to register.
    """
    if symbolic_name.startswith("::"):
        symbolic_name = f"aten{symbolic_name}"

    _verify_custom_op_name(symbolic_name)

    registration.custom_onnx_symbolic(
        symbolic_name,
        opset_version,
        decorate=[
            _symbolic_context_handler,
        ],
    )(symbolic_fn)
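
# A minimal usage sketch for register_custom_op_symbolic (illustrative only;
# "mylib::my_relu" is a hypothetical custom operator name).
def _example_register_custom_op_symbolic():
    def my_relu_symbolic(g, input):
        # Lower the hypothetical custom op to a standard ONNX Relu node.
        return g.op("Relu", input)

    register_custom_op_symbolic("mylib::my_relu", my_relu_symbolic, opset_version=17)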


@_beartype.beartype
def unregister_custom_op_symbolic(symbolic_name: str, opset_version: int):
    """Unregisters ``symbolic_name``.

    See "Custom Operators" in the module documentation for an example usage.

    Args:
        symbolic_name (str): The name of the custom operator in "<domain>::<op>"
            format.
        opset_version (int): The ONNX opset version in which to unregister.
    """
    if symbolic_name.startswith("::"):
        symbolic_name = f"aten{symbolic_name}"

    _verify_custom_op_name(symbolic_name)

    registration.registry.unregister(symbolic_name, opset_version)


@_beartype.beartype
def _validate_dynamic_axes(dynamic_axes, model, input_names, output_names):
    """Ensures the dynamic_axes argument follows the expected format."""
    if len(dynamic_axes) == 0:
        return

    if hasattr(model, "graph"):
        # Extracting set of valid input/output names that shall be used for dynamic_axes
        if (input_names is None) or len(input_names) == 0:
            input_names = [x.debugName() for x in model.graph.inputs()]
        if (output_names is None) or len(output_names) == 0:
            output_names = [y.debugName() for y in model.graph.outputs()]

    valid_names = set((input_names or []) + (output_names or []))

    # If dynamic axes are provided as a list rather than a dictionary, they are
    # first converted to a dictionary in the expected format. If axis names are
    # not provided, automatic names are generated for the dynamic axes of the
    # specified input/output.
    for key, value in dynamic_axes.items():
        if key not in valid_names:
            warnings.warn(
                f"Provided key {key} for dynamic axes is not a valid input/output name"
            )
        if isinstance(value, list):
            warnings.warn(
                "No names were found for specified dynamic axes of provided input. "
                f"Automatically generated names will be applied to each dynamic axis of input {key}"
            )

            value_dict = {}
            for i, x in enumerate(value):
                if not isinstance(x, int):
                    raise ValueError(
                        "The type of axis index is expected to be an integer"
                    )
                if x in value_dict:
                    warnings.warn(
                        f"Duplicate dynamic axis index {x} was provided for input {key}."
                    )
                else:
                    value_dict[x] = str(key) + "_dynamic_axes_" + str(i + 1)
            dynamic_axes[key] = value_dict
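
# A minimal sketch of the list-to-dict conversion performed above (illustrative
# only; "input" and the axis indices are hypothetical).
def _example_validate_dynamic_axes():
    dynamic_axes = {"input": [0, 2, 3]}
    _validate_dynamic_axes(dynamic_axes, None, ["input"], None)
    # The list form is replaced in place with auto-generated axis names:
    # {"input": {0: "input_dynamic_axes_1", 2: "input_dynamic_axes_2",
    #            3: "input_dynamic_axes_3"}}
    return dynamic_axes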


def model_signature(model: Union[torch.nn.Module, Callable]) -> inspect.Signature:
    return inspect.signature(
        model.forward if isinstance(model, torch.nn.Module) else model
    )


================================================
FILE: module/sd_tensorrt.py
================================================
from .tensorrt_wrapper import CallableTensorRTEngineWrapper


class CallableTensorRTEngineWrapperDynamicShapeVAEDecode(CallableTensorRTEngineWrapper):
    args_name = [
        "samples",
    ]

    def gen_onnx_args(self, kwargs, module=None):
        args_name = []
        args = []
        for arg_name in self.args_name:
            args.append(kwargs.get(arg_name, None))
            if args[-1] is not None:
                args_name.append(arg_name)
        dynamic_axes = {
            "samples": {2: "H", 3: "W"},
        }
        for k in list(dynamic_axes.keys()):
            if k not in args_name:
                dynamic_axes.pop(k)
        return args, args_name, dynamic_axes

    def gen_tensorrt_args(self, kwargs):
        input_shape_info = {}
        feed_dict = {}
        for arg_name in self.args_name:
            arg = kwargs.get(arg_name, None)
            if arg is not None:
                feed_dict[arg_name] = arg
                input_shape_info[arg_name] = tuple(arg.shape)

        return feed_dict, input_shape_info

    def gen_tensorrt_args_profile(self, input_shape_info):
        min_input_profile_info = {
            "samples": {2: 2, 3: 2},
        }
        input_profile_info = {}
        for arg_name, shape_info in input_shape_info.items():
            min_shape_config = min_input_profile_info.get(arg_name, None)
            min_shape_info = list(shape_info)
            if min_shape_config is not None:
                for k, v in min_shape_config.items():
                    min_shape_info[k] = v
            input_profile_info[arg_name] = [
                tuple(min_shape_info),
                shape_info,
                shape_info,
            ]

        return input_profile_info
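
# A minimal sketch of the profile produced above (illustrative only; the shape
# stands in for a hypothetical latent of batch 1, 4 channels, 64x64).
def _example_vae_decode_profile():
    profile = CallableTensorRTEngineWrapperDynamicShapeVAEDecode.gen_tensorrt_args_profile(
        None, {"samples": (1, 4, 64, 64)}
    )
    # The spatial axes (2 and 3) get a minimum of 2 while opt/max keep the
    # observed shape:
    # {"samples": [(1, 4, 2, 2), (1, 4, 64, 64), (1, 4, 64, 64)]}
    return profile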


================================================
FILE: module/sfast_pipeline_compiler.py
================================================
import functools
import logging
from dataclasses import dataclass

import torch
from sfast.compilers.diffusion_pipeline_compiler import (
    _enable_xformers,
    _modify_model,
)
from sfast.cuda.graphs import make_dynamic_graphed_callable
from sfast.jit import utils as jit_utils
from sfast.jit.trace_helper import trace_with_kwargs

from .comfy_trace.model_base import BaseModelApplyModelModuleFactory

logger = logging.getLogger()


@dataclass
class TracedModuleCacheItem:
    module: object
    patch_id: int
    device: str


class LazyTraceModule:
    traced_modules = {}

    def __init__(self, config=None, patch_id=None, **kwargs_) -> None:
        self.config = config
        self.patch_id = patch_id
        self.kwargs_ = kwargs_
        self.modify_model = functools.partial(
            _modify_model,
            enable_cnn_optimization=config.enable_cnn_optimization,
            prefer_lowp_gemm=config.prefer_lowp_gemm,
            enable_triton=config.enable_triton,
            enable_triton_reshape=config.enable_triton,
            memory_format=config.memory_format,
        )
        self.cuda_graph_modules = {}

    def ts_compiler(
        self,
        m,
    ):
        with torch.jit.optimized_execution(True):
            if self.config.enable_jit_freeze:
                # raw freeze causes Tensor reference leak
                # because the constant Tensors in the GraphFunction of
                # the compilation unit are never freed.
                m.eval()
                m = jit_utils.better_freeze(m)
            self.modify_model(m)

        if self.config.enable_cuda_graph:
            m = make_dynamic_graphed_callable(m)
        return m

    def __call__(self, model_function, /, **kwargs):
        module_factory = BaseModelApplyModelModuleFactory(model_function, kwargs)
        kwargs = module_factory.get_converted_kwargs()
        key = module_factory.gen_cache_key()

        traced_module = self.cuda_graph_modules.get(key)
        if traced_module is None and not (
            self.config.enable_cuda_graph or self.config.enable_jit_freeze
        ):
            traced_module_cache = self.traced_modules.get(key)
            if traced_module_cache is not None:
                if (
                    traced_module_cache.patch_id != self.patch_id
                    or traced_module_cache.device == "meta"
                ):
                    with module_factory.converted_module_context() as (
                        m_model,
                        m_kwargs,
                    ):
                        next(
                            next(traced_module_cache.module.children()).children()
                        ).load_state_dict(
                            m_model.state_dict(), strict=False, assign=True
                        )

                    traced_module_cache.device = None
                    traced_module_cache.patch_id = self.patch_id
                traced_module = traced_module_cache.module

        if traced_module is None:
            with module_factory.converted_module_context() as (m_model, m_kwargs):
                logger.info(
                    f'Tracing {getattr(m_model, "__name__", m_model.__class__.__name__)}'
                )
                traced_m, call_helper = trace_with_kwargs(
                    m_model, None, m_kwargs, **self.kwargs_
                )

            traced_m = self.ts_compiler(traced_m)
            traced_module = call_helper(traced_m)
            if self.config.enable_cuda_graph or self.config.enable_jit_freeze:
                self.cuda_graph_modules[key] = traced_module
            else:
                self.traced_modules[key] = TracedModuleCacheItem(
                    module=traced_module, patch_id=self.patch_id, device=None
                )

        return traced_module(**kwargs)

    def to_empty(self):
        for v in self.traced_modules.values():
            v.module.to_empty(device="meta")
            v.device = "meta"


def build_lazy_trace_module(config, device, patch_id):
    config.enable_cuda_graph = config.enable_cuda_graph and device.type == "cuda"

    if config.enable_xformers:
        _enable_xformers(None)

    return LazyTraceModule(
        config=config,
        patch_id=patch_id,
        check_trace=True,
        strict=True,
    )
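
# A minimal usage sketch (illustrative only). CompilationConfig comes from
# stable-fast; `model_function` and `c_kwargs` stand in for the values ComfyUI
# passes when applying the diffusion model.
def _example_build_lazy_trace_module(model_function, c_kwargs):
    from sfast.compilers.diffusion_pipeline_compiler import CompilationConfig

    config = CompilationConfig.Default()
    config.enable_cuda_graph = False
    lazy_trace = build_lazy_trace_module(config, torch.device("cuda"), patch_id=0)
    # The model is traced on the first call; later calls reuse the cached
    # TorchScript module keyed by the factory's cache key.
    return lazy_trace(model_function, **c_kwargs)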


================================================
FILE: module/tensorrt_utilities.py
================================================
#
# Copyright 2022 The HuggingFace Inc. team.
# SPDX-FileCopyrightText: Copyright (c) 1993-2023 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
# SPDX-License-Identifier: Apache-2.0
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
import copy
from collections import OrderedDict
from logging import warning

import numpy as np
import tensorrt as trt
import torch
import zstandard
from polygraphy import util
from polygraphy.backend.trt import (
    ModifyNetworkOutputs,
    Profile,
    bytes_from_engine,
    engine_from_bytes,
    engine_from_network,
    network_from_onnx_bytes,
    network_from_onnx_path,
)
from polygraphy.logger import G_LOGGER
from torch.cuda import nvtx
from tqdm import tqdm

TRT_LOGGER = trt.Logger(trt.Logger.VERBOSE)
G_LOGGER.module_severity = G_LOGGER.VERBOSE

# Map of numpy dtype -> torch dtype
numpy_to_torch_dtype_dict = {
    np.uint8: torch.uint8,
    np.int8: torch.int8,
    np.int16: torch.int16,
    np.int32: torch.int32,
    np.int64: torch.int64,
    np.float16: torch.float16,
    np.float32: torch.float32,
    np.float64: torch.float64,
    np.complex64: torch.complex64,
    np.complex128: torch.complex128,
}
# Compare parsed (major, minor) versions; plain string comparison is unreliable.
if tuple(int(v) for v in np.version.full_version.split(".")[:2]) >= (1, 24):
    numpy_to_torch_dtype_dict[np.bool_] = torch.bool
else:
    numpy_to_torch_dtype_dict[np.bool] = torch.bool

# Map of torch dtype -> numpy dtype
torch_to_numpy_dtype_dict = {
    value: key for (key, value) in numpy_to_torch_dtype_dict.items()
}


class TQDMProgressMonitor(trt.IProgressMonitor):
    def __init__(self):
        trt.IProgressMonitor.__init__(self)
        self._active_phases = {}
        self._step_result = True
        self.max_indent = 5

    def phase_start(self, phase_name, parent_phase, num_steps):
        leave = False
        try:
            if parent_phase is not None:
                nbIndents = (
                    self._active_phases.get(parent_phase, {}).get(
                        "nbIndents", self.max_indent
                    )
                    + 1
                )
                if nbIndents >= self.max_indent:
                    return
            else:
                nbIndents = 0
                leave = True
            self._active_phases[phase_name] = {
                "tq": tqdm(
                    total=num_steps, desc=phase_name, leave=leave, position=nbIndents
                ),
                "nbIndents": nbIndents,
                "parent_phase": parent_phase,
            }
        except KeyboardInterrupt:
            # The phase_start callback cannot directly cancel the build, so request the cancellation from within step_complete.
            self._step_result = False

    def phase_finish(self, phase_name):
        try:
            if phase_name in self._active_phases.keys():
                self._active_phases[phase_name]["tq"].update(
                    self._active_phases[phase_name]["tq"].total
                    - self._active_phases[phase_name]["tq"].n
                )

                parent_phase = self._active_phases[phase_name].get("parent_phase", None)
                while parent_phase is not None:
                    self._active_phases[parent_phase]["tq"].refresh()
                    parent_phase = self._active_phases[parent_phase].get(
                        "parent_phase", None
                    )
                if (
                    self._active_phases[phase_name]["parent_phase"]
                    in self._active_phases.keys()
                ):
                    self._active_phases[
                        self._active_phases[phase_name]["parent_phase"]
                    ]["tq"].refresh()
                del self._active_phases[phase_name]
            pass
        except KeyboardInterrupt:
            self._step_result = False

    def step_complete(self, phase_name, step):
        try:
            if phase_name in self._active_phases.keys():
                self._active_phases[phase_name]["tq"].update(
                    step - self._active_phases[phase_name]["tq"].n
                )
            return self._step_result
        except KeyboardInterrupt:
            # There is no need to propagate this exception to TensorRT. We can simply cancel the build.
            return False


class Engine:
    def __init__(self, engine_path, enable_cuda_graph=False):
        self.engine_path = engine_path
        self.engine = None
        self.context = None
        self.buffers = OrderedDict()
        self.tensors = OrderedDict()
        self.shared_device_memory = None

        self.enable_cuda_graph = enable_cuda_graph
        self.cuda_graph_instance = None  # cuda graph
        self.inferred = False
        self.cuda_graph_stream = None

        self.refited_engine_byte = None

        self.last_device_memory_size = 0

    def __del__(self):
        del self.engine
        del self.context
        del self.buffers
        del self.tensors

    def refit_simple(self, onnx_model):
        print(f"Refitting TensorRT engine with {onnx_model} weights")

        refitter = trt.Refitter(self.engine, TRT_LOGGER)
        parser_refitter = trt.OnnxParserRefitter(refitter, TRT_LOGGER)
        if type(onnx_model) is bytes:
            result = parser_refitter.refit_from_bytes(onnx_model)
        else:
            result = parser_refitter.refit_from_file(onnx_model)

        if not result or not refitter.refit_cuda_engine():
            raise Exception("Failed to refit!")

    def refit_from_dict(
        self,
        refit_weights: dict[str, torch.Tensor],
        constant_refit_weights: dict[str, torch.Tensor],
    ):
        # Initialize refitter
        refitter = trt.Refitter(self.engine, TRT_LOGGER)

        refitted_weights = set()
        print(f"[I] Total refittable weights {len(refitter.get_all_weights())}.")

        # iterate through all tensorrt refittable weights
        for trt_weight_name in refitter.get_all_weights():
            # get weight from state dict
            if trt_weight_name in refit_weights:
                refit_weight = refit_weights[trt_weight_name]
            elif trt_weight_name in constant_refit_weights:
                refit_weight = constant_refit_weights[trt_weight_name]
                # print(refit_weight)
            else:
                continue

            trt_datatype = refitter.get_weights_prototype(trt_weight_name).dtype
            if trt_datatype == trt.DataType.FLOAT:
                refit_weight = refit_weight.float()
            elif trt_datatype == trt.DataType.HALF:
                refit_weight = refit_weight.half()
            else:
                print("unhandled", trt_datatype)
                continue

            # trt.Weight and trt.TensorLocation
            trt_wt_tensor = trt.Weights(
                trt_datatype,
                refit_weight.data_ptr(),
                torch.numel(refit_weight),
            )
            trt_wt_location = (
                trt.TensorLocation.DEVICE
                if refit_weight.is_cuda
                else trt.TensorLocation.HOST
            )

            self.buffers[trt_weight_name] = refit_weight

            # apply refit
            assert refitter.set_named_weights(
                trt_weight_name, trt_wt_tensor, trt_wt_location
            )
            refitted_weights.add(trt_weight_name)

        # assert set(refitted_weights) == set(refit_weights.keys())
        if not refitter.refit_cuda_engine():
            raise Exception("Error: failed to refit new weights.")

        print(f"[I] Total refitted weights {len(refitted_weights)}.")

    def build(
        self,
        onnx_model,
        dtype,
        input_profile=None,
        enable_refit=False,
        enable_weight_streaming=False,
        enable_all_tactics=False,
        timing_cache=None,
        update_output_names=None,
    ):
        print(f"Building TensorRT engine for : {self.engine_path}")
        config_kwargs = {}
        if not enable_all_tactics:
            config_kwargs["tactic_sources"] = []

        if type(onnx_model) is bytes:
            network = network_from_onnx_bytes(
                onnx_model,
                flags=[
                    trt.OnnxParserFlag.NATIVE_INSTANCENORM,
                ],
                strongly_typed=enable_weight_streaming,
            )
        else:
            network = network_from_onnx_path(
                onnx_model,
                flags=[
                    trt.OnnxParserFlag.NATIVE_INSTANCENORM,
                ],
                strongly_typed=enable_weight_streaming,
            )
        if update_output_names:
            print(f"Updating network outputs to {update_output_names}")
            network = ModifyNetworkOutputs(network, update_output_names)

        input_names = set()
        nd = network[1]
        for i in range(nd.num_inputs):
            input_names.add(nd.get_input(i).name)

        p = [Profile()]
        if input_profile:
            p = [Profile() for i in range(len(input_profile))]
            for _p, i_profile in zip(p, input_profile):
                for name, dims in i_profile.items():
                    if name not in input_names:
                        continue
                    assert len(dims) == 3
                    _p.add(name, min=dims[0], opt=dims[1], max=dims[2])

        builder = network[0]
        config = builder.create_builder_config()
        config.progress_monitor = TQDMProgressMonitor()

        if not enable_weight_streaming:
            if dtype == torch.float16:
                config.set_flag(trt.BuilderFlag.FP16)
            elif dtype == torch.bfloat16:
                config.set_flag(trt.BuilderFlag.BF16)

        if enable_refit:
            config.set_flag(trt.BuilderFlag.STRIP_PLAN)
            # Slower than REFIT_IDENTICAL
            # config.set_flag(trt.BuilderFlag.REFIT)
            config.set_flag(trt.BuilderFlag.REFIT_IDENTICAL)

        if enable_weight_streaming:
            config.set_flag(trt.BuilderFlag.WEIGHT_STREAMING)
        # config.set_preview_feature(
        #     trt.PreviewFeature.DISABLE_EXTERNAL_TACTIC_SOURCES_FOR_CORE_0805, False
        # )
        # config.set_tactic_sources(1 << int(trt.TacticSource.CUBLAS) | 1 << int(trt.TacticSource.CUBLAS_LT))

        cache = None
        try:
            with util.LockFile(timing_cache):
                timing_cache_data = util.load_file(
                    timing_cache, description="tactic timing cache"
                )
                cache = config.create_timing_cache(timing_cache_data)
        except FileNotFoundError:
            warning(
                "Timing cache file {} not found, falling back to empty timing cache.".format(
                    timing_cache
                )
            )
        if cache is not None:
            config.set_timing_cache(cache, ignore_mismatch=True)

        profiles = copy.deepcopy(p)
        for profile in profiles:
            # Last profile is used for set_calibration_profile.
            calib_profile = profile.fill_defaults(network[1]).to_trt(
                builder, network[1]
            )
            config.add_optimization_profile(calib_profile)

        try:
            self.engine = engine_from_network(
                network,
                config,
                save_timing_cache=timing_cache,
            )
        except Exception as e:
            raise Exception(f"Failed to build engine: {e}")
        self.update_binding_set()

    def save_engine(self):
        print(f"Saving TensorRT engine: {self.engine_path}")
        with zstandard.open(self.engine_path, "wb") as zwfp:
            zwfp.write(bytes_from_engine(self.engine))

    def load(self):
        if self.refited_engine_byte is not None:
            print("Loading TensorRT engine from byte cache.")
            self.engine = engine_from_bytes(self.refited_engine_byte)
            self.refited_engine_byte = None
        else:
            print(f"Loading TensorRT engine: {self.engine_path}")
            with zstandard.open(self.engine_path, "rb") as zrfp:
                self.engine = engine_from_bytes(zrfp.read())
        self.update_binding_set()

    def update_binding_set(self):
        self.binding_set = set()
        for idx in range(self.engine.num_io_tensors):
            self.binding_set.add(self.engine[idx])

    def offload(self, offload_context_only=False):
        if not offload_context_only and self.refited_engine_byte is None:
            serialization_config = self.engine.create_serialization_config()
            serialization_config.flags &= ~(
                1 << int(trt.SerializationFlag.EXCLUDE_WEIGHTS)
            )
            self.refited_engine_byte = self.engine.serialize_with_config(
                serialization_config
            )
            self.buffers.clear()

        del self.context
        self.context = None

        if not offload_context_only:
            del self.engine
            self.engine = None

        self.tensors = OrderedDict()
        self.shared_device_memory = None

        self.cuda_graph_instance = None
        self.inferred = False
        self.cuda_graph_stream = None

    def is_weight_streaming_engine(self):
        return self.engine.streamable_weights_size > 0

    def activate(
        self, reuse_device_memory=None, memory_limit_size=1000 * 1000 * 1000 * 3
    ):
        if self.context is None:
            if self.is_weight_streaming_engine():

                def update_budget_size():
                    budget_size = memory_limit_size - self.engine.device_memory_size_v2
                    if budget_size < 0:
                        budget_size = 0
                    self.engine.weight_streaming_budget_v2 = min(
                        budget_size, self.engine.streamable_weights_size
                    )

                # If weight streaming is enabled, device_memory_size_v2 changes
                # once the budget is set, so the budget is computed twice to converge.
                update_budget_size()
                update_budget_size()

            if reuse_device_memory:
                self.context = (
                    self.engine.create_execution_context_without_device_memory()
                )
            #    self.context.device_memory = reuse_device_memory
            else:
                self.context = self.engine.create_execution_context()
            assert self.context is not None

    def get_device_memory_size(self):
        if self.engine is not None:
            if self.is_weight_streaming_engine():
                self.last_device_memory_size = (
                    self.engine.device_memory_size_v2
                    + self.engine.weight_streaming_budget_v2
                )
            else:
                self.last_device_memory_size = self.engine.device_memory_size_v2
        return self.last_device_memory_size

    def allocate_buffers(
        self, shape_dict=None, device="cuda", allocate_input_buffers=True
    ):
        nvtx.range_push("allocate_buffers")
        for idx in range(self.engine.num_io_tensors):
            tensor_name = self.engine.get_tensor_name(idx)

            if shape_dict and tensor_name in shape_dict:
                shape = shape_dict[tensor_name].shape
            else:
                shape = self.context.get_tensor_shape(tensor_name)
            shape = list(shape)
            if (
                tensor_name in self.tensors
                and list(self.tensors[tensor_name].shape) == shape
            ):
                continue
            dtype = trt.nptype(self.engine.get_tensor_dtype(tensor_name))
            if self.engine.get_tensor_mode(tensor_name) == trt.TensorIOMode.INPUT:
                self.context.set_input_shape(tensor_name, shape)
                if not allocate_input_buffers or tensor_name not in shape_dict:
                    continue
            tensor = torch.empty(
                tuple(shape), dtype=numpy_to_torch_dtype_dict[dtype], device=device
            )
            self.tensors[tensor_name] = tensor
        if self.shared_device_memory is None:
            self.shared_device_memory = torch.empty(
                self.engine.device_memory_size_v2, dtype=torch.uint8, device=device
            )
            self.context.set_device_memory(
                self.shared_device_memory.data_ptr(), self.engine.device_memory_size_v2
            )
        nvtx.range_pop()

    def release_buffers(self):
        self.tensors = OrderedDict()

    def infer(
        self,
        feed_dict,
        stream: torch.cuda.Stream,
        stream_sync=False,
        free_shared_device_memory=True,
    ):
        nvtx.range_push("set_tensors")
        for name, buf in feed_dict.items():
            if name in self.tensors:
                self.tensors[name].copy_(buf)
            elif name in self.binding_set:
                dtype = trt.nptype(self.engine.get_tensor_dtype(name))
                self.tensors[name] = buf.to(dtype=numpy_to_torch_dtype_dict[dtype])

        for name, tensor in self.tensors.items():
            self.context.set_tensor_address(name, tensor.data_ptr())
        nvtx.range_pop()
        nvtx.range_push("execute")
        if self.enable_cuda_graph and self.cuda_graph_instance is not None:
            self.cuda_graph_instance.replay()
        elif self.enable_cuda_graph and self.inferred:
            # capture cuda graph
            infer_graph = torch.cuda.CUDAGraph()
            self.cuda_graph_stream = torch.cuda.Stream()

            with torch.cuda.graph(infer_graph, stream=self.cuda_graph_stream):
                noerror = self.context.execute_async_v3(
                    self.cuda_graph_stream.cuda_stream
                )

            if not noerror:
                raise ValueError("ERROR: inference failed.")

            self.cuda_graph_instance = infer_graph
        else:
            noerror = self.context.execute_async_v3(stream.cuda_stream)
            if not noerror:
                raise ValueError("ERROR: inference failed.")
            self.inferred = True
        nvtx.range_pop()

        if stream_sync:
            stream.synchronize()

        if not self.enable_cuda_graph and free_shared_device_memory:
            del self.shared_device_memory
            self.shared_device_memory = None

        return self.tensors

    def set_static_dict_input(self, feed_dict):
        nvtx.range_push("set_tensors")
        for name, tensor in feed_dict.items():
            dtype = trt.nptype(self.engine.get_tensor_dtype(name))
            feed_dict[name] = tensor.to(dtype=numpy_to_torch_dtype_dict[dtype])
            self.context.set_tensor_address(name, feed_dict[name].data_ptr())
        nvtx.range_pop()

    def __str__(self):
        out = ""
        for opt_profile in range(self.engine.num_optimization_profiles):
            for binding_idx in range(self.engine.num_io_tensors):
                name = self.engine.get_tensor_name(binding_idx)
                shape = self.engine.get_tensor_profile_shape(opt_profile, name)
                out += f"\t{name} = {shape}\n"
        return out


================================================
FILE: module/tensorrt_wrapper.py
================================================
import gc
import hashlib
import json
import logging
import os
import tempfile
import time
from dataclasses import dataclass, field
from typing import Any, List

import comfy.cldm.cldm
import comfy.gligen
import comfy.ldm.modules.diffusionmodules.openaimodel
import comfy.model_management
import comfy.model_patcher
import numpy
import safetensors
import safetensors.torch
import tensorrt
import torch
import torch.version
from torch.cuda import nvtx

from .comfy_trace_utilities import hash_arg
from .onnx_module_refit import (
    make_constant_params_dict_by_onnx_model,
    make_module_onnx_tensor_gen_map_by_params_dict,
    make_params_dict_by_module,
)
from .tensorrt_utilities import Engine

_logger = logging.getLogger(__name__)


@dataclass
class TensorRTEngineConfig:
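    # keep_width/keep_height/keep_batch_size/keep_embedding_block define the
    # warm-up shapes used to build the initial engine profile (latent size
    # keep_height/8 x keep_width/8, batch keep_batch_size * 2, context length
    # keep_embedding_block * 77 tokens).
    # use_dedicated_engine adds a weight hash to the engine cache key so each
    # checkpoint gets its own engine instead of a refitted shared one.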
    enable_cuda_graph: bool
    keep_width: int = 768
    keep_height: int = 768
    keep_batch_size: int = 2
    keep_embedding_block: int = 2
    use_dedicated_engine: bool = False


class CallableTensorRTEngineWrapper:
    def __init__(self, tensorrt_context, identification) -> None:
        self.tensorrt_context: TensorRTEngineContext = tensorrt_context
        self.identification = identification + self.__class__.__name__

        self.engine: Engine = None
        self.onnx_cache_dir = None
        self.onnx_cache = None
        self.onnx_refit_info = None

        self.module_identification = None
        self.input_shape_info = None
        self.input_profile_info = None

        self.engine_comfy_model_patcher_wrapper = None

        self.engine_cache_map = {}

    def gen_onnx_args(self, kwargs, module=None):
        args = []
        args_name = []
        for arg_name, arg in kwargs.items():
            args.append(arg)
            if arg is not None:
                args_name.append(arg_name)

        return args, args_name, None

    def gen_onnx_outputs(self, module):
        return ["output"]

    def gen_tensorrt_args(self, kwargs):
        input_shape_info = {}
        feed_dict = {}
        for arg_name, arg in kwargs.items():
            if arg is not None:
                feed_dict[arg_name] = arg
                input_shape_info[arg_name] = tuple(arg.shape)

        return feed_dict, input_shape_info

    def gen_tensorrt_args_profile(self, input_shape_info):
        return {k: [v, v, v] for k, v in input_shape_info.items()}

    def gen_tensorrt_outputs(self, output):
        return output["output"]

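    # An engine can be reused only when every input tensor's shape falls inside
    # the [min, max] range of the cached optimization profile.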
    def is_profile_compatible(self, input_profile_info, input_shape_info):
        if input_profile_info is None:
            return False
        if len(input_profile_info) != len(input_shape_info):
            return False
        for arg_name, shape in input_shape_info.items():
            profile = input_profile_info.get(arg_name, None)
            if profile is None:
                return False
            if len(profile[0]) != len(shape):
                return False
            for d, mind, maxd in zip(shape, profile[0], profile[2]):
                if d < mind or d > maxd:
                    return False
        return True

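    # Build or fetch a cached engine whenever the input shapes fall outside the
    # current optimization profile, then run inference through TensorRT.
    # Flow: hash a cache key from the torch/TensorRT versions, UNet config and
    # input profile (among other things) -> export the module to ONNX if no
    # engine or refit info is cached -> build and save the engine -> load and
    # activate it, refit its weights, register it with ComfyUI model
    # management -> allocate buffers and infer.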
    def __call__(self, module: torch.nn.Module, /, **kwargs: Any) -> Any:
        feed_dict, input_shape_info = self.gen_tensorrt_args(kwargs)

        if self.engine is None or not self.is_profile_compatible(
            self.input_profile_info, input_shape_info
        ):
            self.input_shape_info = input_shape_info
            input_profile_info = self.gen_tensorrt_args_profile(input_shape_info)

            if self.tensorrt_context.identify_weight_hash:
                if self.module_identification is None:
                    self.module_identification = sha256sum_state_dict(
                        module.state_dict()
                    )

            engine_cache_key = (
                hash_arg(torch.version.__version__),
                hash_arg(tensorrt.__version__),
                hash_arg(self.tensorrt_context.unet_config),
                hash_arg(self.identification),
                hash_arg(input_profile_info),
                hash_arg(self.tensorrt_context.enable_weight_streaming),
                hash_arg(str(self.tensorrt_context.model_sampling_type)),
                hash_arg(str(self.module_identification)),
            )

            if engine_cache_key in self.engine_cache_map:
                (
                    self.engine,
                    self.engine_comfy_model_patcher_wrapper,
                ) = self.engine_cache_map[engine_cache_key]
                self.input_profile_info = input_profile_info
            else:
                engine = get_engine_with_cache(engine_cache_key)

                args, args_name, dynamic_axes = self.gen_onnx_args(
                    kwargs, module=module
                )

                onnx_cache_key = (
                    hash_arg(torch.version.__version__),
                    hash_arg(self.tensorrt_context.unet_config),
                    hash_arg(self.identification),
                    hash_arg((args_name, dynamic_axes)),
                    hash_arg(str(self.tensorrt_context.model_sampling_type)),
                    hash_arg(str(self.module_identification)),
                )
                self.onnx_refit_info = get_refit_info_cache(onnx_cache_key)

                if (
                    (engine is None)
                    or (self.onnx_refit_info is None)
                    or (not self.tensorrt_context.enable_fast_refit)
                ) and self.onnx_cache is None:
                    module.to(device=self.tensorrt_context.cuda_device)
                    self.onnx_cache_dir = tempfile.TemporaryDirectory(
                        suffix="onnx_cache_dir"
                    )
                    self.onnx_cache = os.path.join(
                        self.onnx_cache_dir.name, "onnx_cache.onnx"
                    )
                    try:
                        use_patched_export = False
                        # the only change is that the patched export function also returns the ONNX params_dict
                        if torch.version.__version__ == "2.4.0":
                            from .patched_onnx_export.utils_2_4_0 import (
                                export as patched_export,
                            )

                            use_patched_export = True
                        if use_patched_export:
                            torch_out, params_dict = patched_export(
                                module,
                                tuple(args),
                                self.onnx_cache,
                                export_params=True,
                                verbose=False,
                                do_constant_folding=False,
                                input_names=args_name,
                                output_names=self.gen_onnx_outputs(module),
                                dynamic_axes=dynamic_axes,
                                # dynamo=True
                            )
                            if self.tensorrt_context.enable_fast_refit:
                                self.onnx_refit_info = gen_refit_info(onnx_cache_key)
                                self.onnx_refit_info.tensor_gen_map = (
                                    make_module_onnx_tensor_gen_map_by_params_dict(
                                        module, params_dict
                                    )
                                )
                                self.onnx_refit_info.constant_params_dict = (
                                    make_constant_params_dict_by_onnx_model(
                                        self.onnx_cache
                                    )
                                )
                                self.onnx_refit_info.save()
                            del params_dict
                        else:
                            torch.onnx.export(
                                module,
                                tuple(args),
                                self.onnx_cache,
                                export_params=True,
                                verbose=False,
                                do_constant_folding=False,
                                input_names=args_name,
                                output_names=self.gen_onnx_outputs(module),
                                dynamic_axes=dynamic_axes,
                            )
                    except Exception as e:
                        self.onnx_cache_dir.cleanup()
                        self.onnx_cache_dir = None
                        self.onnx_cache = None
                        self.onnx_refit_info = None
                        raise e

                nvtx.range_push("offload origin model")
                module.to(device="cpu")
                gc.collect()
                comfy.model_management.soft_empty_cache()
                nvtx.range_pop()

                additional_keep_models = get_additional_keep_models()

                if engine is None:
                    comfy.model_management.free_memory(
                        6 * 1024 * 1024 * 1024,
                        self.tensorrt_context.cuda_device,
                    )
                    comfy.model_management.soft_empty_cache()
                    engine = gen_engine(
                        engine_cache_key,
                        self.onnx_cache,
                        [input_profile_info],
                        self.tensorrt_context.dtype,
                        enable_weight_streaming=self.tensorrt_context.enable_weight_streaming,
                    )
                    engine.save_engine()

                self.engine = engine
                try:
                    nvtx.range_push("load engine")
                    if self.engine.engine is None:
                        self.engine.load()

                    # reserve some memory for pytorch
                    memory_limit_size = int(
                        comfy.model_management.get_total_memory()
                        - (1024 * 1024 * 1024 * 2)
                    )

                    self.engine.activate(
                        True,
                        min(
                            self.tensorrt_context.lowvram_model_memory,
                            memory_limit_size,
                        ),
                    )
                    nvtx.range_push("refit engine")
                    if (
                        self.tensorrt_context.enable_fast_refit
                        and self.onnx_refit_info is not None
                    ):
                        _logger.info("using fast refit")
                        self.engine.refit_from_dict(
                            make_params_dict_by_module(
                                module, self.onnx_refit_info.tensor_gen_map
                            ),
                            self.onnx_refit_info.constant_params_dict,
                        )
                    else:
                        self.engine.refit_simple(self.onnx_cache)
                    nvtx.range_pop()
                    self.engine_comfy_model_patcher_wrapper = (
                        TensorRTEngineComfyModelPatcherWrapper(
                            engine,
                            load_device=self.tensorrt_context.cuda_device,
                            offload_device="cpu",
                            size=self.engine.get_device_memory_size(),
                        )
                    )
                    comfy.model_management.load_models_gpu(
                        [
                            *self.tensorrt_context.keep_models,
                            self.engine_comfy_model_patcher_wrapper,
                            *get_additional_keep_models(),
                            *additional_keep_models,
                        ],
                        self.engine.get_device_memory_size(),
                    )
                    self.input_profile_info = input_profile_info
                    self.engine_cache_map[engine_cache_key] = (
                        self.engine,
                        self.engine_comfy_model_patcher_wrapper,
                    )
                    nvtx.range_pop()
                except Exception as e:
                    self.engine = None
                    gc.collect()
                    raise e

        if self.engine.context is None:
            comfy.model_management.load_models_gpu(
                [
                    *self.tensorrt_context.keep_models,
                    self.engine_comfy_model_patcher_wrapper,
                    *get_additional_keep_models(),
                ],
                self.engine.get_device_memory_size(),
            )

        self.engine.allocate_buffers(
            feed_dict,
            device=self.tensorrt_context.cuda_device,
            allocate_input_buffers=False,
        )

        output = self.engine.infer(
            feed_dict,
            self.tensorrt_context.cuda_stream,
            self.tensorrt_context.infer_cuda_stream_sync,
        )
        output = self.gen_tensorrt_outputs(output)
        self.engine.release_buffers()

        return output


class TensorRTEngineComfyModelPatcherWrapper(comfy.model_patcher.ModelPatcher):
    def patch_model_lowvram(self, device_to=None, *arg, **kwargs):
        self.patch_model(device_to, patch_weights=False)

    def patch_model(self, device_to=None, *arg, **kwargs):
        if device_to is not None:
            if self.model.engine is None:
                self.model.load()
            if self.model.context is None:
                self.model.activate(True, self.model.last_device_memory_size)
            self.current_device = device_to

        return self.model

    def unpatch_model(self, device_to=None, *arg, **kwargs):
        if device_to is not None:
            self.model.offload()
            self.current_device = device_to


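# Collect the ControlNet / GLIGEN models that ComfyUI currently has loaded so
# they are kept on the GPU (not evicted) when load_models_gpu loads the engine.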
def get_additional_keep_models():
    models = []
    for model in comfy.model_management.current_loaded_models:
        if isinstance(
            model.real_model, (comfy.cldm.cldm.ControlNet, comfy.gligen.Gligen)
        ):
            models.append(model.model)
    return models


@dataclass
class TensorRTEngineContext:
    cuda_device: Any = None
    shared_device_memory: Any = None
    cuda_stream: Any = None
    unet_config: dict = None
    model_sampling_type: Any = None
    model_type: str = ""
    keep_models: List = field(default_factory=list)
    dtype: object = torch.float16
    enable_weight_streaming: bool = False
    enable_fast_refit: bool = True
    infer_cuda_stream_sync: bool = False
    identify_weight_hash: bool = False
    lowvram_model_memory: int = 0


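# TensorRT tactic timing cache shared across engine builds; created empty on
# first use so later builds can reuse measured kernel timings.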
TIMING_CACHE_PATH = os.path.join(
    os.path.dirname(os.path.dirname(__file__)),
    "tensorrt_engine_cache",
    "timing_cache.cache",
)
if not os.path.exists(TIMING_CACHE_PATH):
    os.makedirs(os.path.dirname(TIMING_CACHE_PATH), exist_ok=True)
    with open(TIMING_CACHE_PATH, "wb") as f:
        pass


def get_key_hash(key):
    return hashlib.sha256(str(key).encode()).hexdigest()


def get_cache_path(key, dir_name):
    cache_dir = os.path.join(os.path.dirname(os.path.dirname(__file__)), dir_name)
    if not os.path.exists(cache_dir):
        os.makedirs(cache_dir, exist_ok=True)
    basename = get_key_hash(key)
    return os.path.join(cache_dir, basename)


def get_engine_path(key):
    return get_cache_path(key, "tensorrt_engine_cache") + ".trt"


def get_engine_with_cache(key):
    engine_path = get_engine_path(key)
    if os.path.exists(engine_path):
        return Engine(engine_path)
    return None


def gen_engine(key, onnx_model, input_profile, dtype, enable_weight_streaming=False):
    engine = Engine(get_engine_path(key))
    s = time.time()
    engine.build(
        onnx_model,
        dtype=dtype,
        enable_refit=True,
        timing_cache=TIMING_CACHE_PATH,
        input_profile=input_profile,
        enable_weight_streaming=enable_weight_streaming,
    )
    e = time.time()
    _logger.info(f"Time taken to build: {e-s}s")
    return engine


def get_refit_info_cache(key):
    refit_info_path = get_cache_path(key, "refit_info") + ".st"
    if os.path.exists(refit_info_path):
        return TorchTensorRTRefitInfo(refit_info_path).load()
    return None


def gen_refit_info(key):
    refit_info_path = get_cache_path(key, "refit_info") + ".st"
    return TorchTensorRTRefitInfo(refit_info_path)


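# Holds the data needed for fast refit: the ONNX constant tensors are stored as
# the safetensors payload, and the module-parameter -> ONNX-initializer mapping
# (tensor_gen_map) is stored as JSON in the safetensors metadata.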
class TorchTensorRTRefitInfo:
    def __init__(self, info_path) -> None:
        self.info_path = info_path
        self.tensor_gen_map = None
        self.constant_params_dict = None

    def save(self):
        safetensors.torch.save_file(
            self.constant_params_dict,
            self.info_path,
            metadata={"tensor_gen_map": json.dumps(self.tensor_gen_map)},
        )

    def load(self):
        self.constant_params_dict = safetensors.torch.load_file(self.info_path)
        with safetensors.safe_open(self.info_path, "torch") as st:
            if st.metadata() is not None:
                self.tensor_gen_map = json.loads(st.metadata()["tensor_gen_map"])
        return self


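# Hash the module weights (cast to fp16) to identify a specific checkpoint; used
# as part of the engine cache key when identify_weight_hash is enabled.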
def sha256sum_state_dict(state_dict: dict[str, torch.Tensor]):
    hasher = hashlib.sha256()

    for k, v in state_dict.items():
        tensor_bytes = v.cpu().detach().numpy().astype(numpy.float16).data.tobytes()
        hasher.update(tensor_bytes)

    return hasher.hexdigest()


================================================
FILE: node.py
================================================
import torch
from sfast.compilers.diffusion_pipeline_compiler import CompilationConfig

from .module.sfast_pipeline_compiler import build_lazy_trace_module


def is_cuda_malloc_async():
    return "cudaMallocAsync" in torch.cuda.get_allocator_backend()


def gen_stable_fast_config():
    config = CompilationConfig.Default()
    # xformers and Triton are suggested for achieving the best performance.
    # Triton can be slow the first time it generates, compiles, and fine-tunes kernels.
    try:
        import xformers

        config.enable_xformers = True
    except ImportError:
        print("xformers not installed, skip")
    try:
        import triton

        config.enable_triton = True
    except ImportError:
        print("triton not installed, skip")

    if config.enable_triton and is_cuda_malloc_async():
        print("disable stable fast triton because of cudaMallocAsync")
        config.enable_triton = False

    # CUDA Graph is suggested for small batch sizes.
    # After capturing, the model only accepts one fixed image size.
    # If you want the model to be dynamic, don't enable it.
    config.enable_cuda_graph = True
    # config.enable_jit_freeze = False
    return config


class StableFastPatch:
    def __init__(self, model, config):
        self.model = model
        self.config = config
        self.stable_fast_model = None

    def __deepcopy__(self, memo=None):
        return self

    def __call__(self, model_function, params):
        input_x = params.get("input")
        timestep_ = params.get("timestep")
        c = params.get("c")

        # disable with accelerate for now
        if hasattr(model_function.__self__, "hf_device_map"):
            return model_function(input_x, timestep_, **c)

        if self.stable_fast_model is None:
            self.stable_fast_model = build_lazy_trace_module(
                self.config,
                input_x.device,
                id(self),
            )

        return self.stable_fast_model(
            model_function, input_x=input_x, timestep=timestep_, **c
        )

    def to(self, device):
        if type(device) is torch.device:
            if self.config.enable_cuda_graph or self.config.enable_jit_freeze:
                if device.type == "cpu":
                    # ComfyUI asked us to move the model to the CPU, but that is
                    # not possible while CUDA graph or JIT freeze is enabled, so
                    # drop the compiled model instead.
                    del self.stable_fast_model
                    self.stable_fast_model = None
                    print(
                        "\33[93mWarning: Your graphics card doesn't have enough video memory to keep the model. If you experience a noticeable delay every time you start sampling, please consider disabling enable_cuda_graph.\33[0m"
                    )
            else:
                if self.stable_fast_model is not None and device.type == "cpu":
                    self.stable_fast_model.to_empty()
        return self


class ApplyStableFastUnet:
    @classmethod
    def INPUT_TYPES(s):
        return {
            "required": {
                "model": ("MODEL",),
                "enable_cuda_graph": ("BOOLEAN", {"default": True}),
            }
        }

    RETURN_TYPES = ("MODEL",)
    FUNCTION = "apply_stable_fast"

    CATEGORY = "loaders"

    def apply_stable_fast(self, model, enable_cuda_graph):
        config = gen_stable_fast_config()

        if not enable_cuda_graph:
            config.enable_cuda_graph = False
            config.enable_jit_freeze = False

        if config.memory_format is not None:
            model.model.to(memory_format=config.memory_format)

        patch = StableFastPatch(model, config)
        model_stable_fast = model.clone()
        model_stable_fast.set_model_unet_function_wrapper(patch)
        return (model_stable_fast,)


================================================
FILE: requirements.txt
================================================
zstandard
onnx


================================================
FILE: tensorrt_node.py
================================================
import copy
import enum

import comfy.model_management
import comfy.model_patcher
import nodes
import torch

from .module.comfy_trace.model_base import (
    UNetModelModuleFactory,
)
from .module.comfy_trace.sd import VAEDecodeModule
from .module.controlnet_tensorrt import (
    CallableTensorRTEngineWrapperDynamicShapeControlNet,
)
from .module.openaimodel_tensorrt import (
    TENSORRT_CONTEXT_KEY,
    CallableTensorRTEngineWrapperDynamicShapeUNetModelForward,
    TensorRTEngineBlockContext,
    do_hook_forward_timestep_embed,
    undo_hook_forward_timestep_embed,
)
from .module.sd_tensorrt import CallableTensorRTEngineWrapperDynamicShapeVAEDecode
from .module.tensorrt_wrapper import TensorRTEngineConfig, TensorRTEngineContext


class BlockTensorRTPatch(torch.nn.Module):
    def __init__(self, config, model_config, model_sampling_type):
        super().__init__()
        self.model: torch.nn.Module = None
        self.model_config = model_config
        self.model_sampling_type = model_sampling_type
        self.config = config
        self.model_device = torch.device("cpu")
        self.tensorrt_module = None
        self.lowvram_model_memory = 0

    def __deepcopy__(self, memo=None):
        return self

    @property
    def dtype(self):
        return self.model.dtype

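    # Run one dummy forward pass at the configured keep_* shapes so the TensorRT
    # engines are exported and built before the first real sampling step.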
    def warmup(
        self,
        x,
        timesteps,
        context,
        y,
        control,
        transformer_options,
        **kwargs,
    ):
        warmup_input_x = torch.zeros(
            (
                self.config.keep_batch_size * 2,
                x.shape[1],
                int(self.config.keep_height / 8),
                int(self.config.keep_width / 8),
            ),
            device=x.device,
            dtype=x.dtype,
        )
        warmup_x = warmup_input_x
        warmup_timesteps = torch.ones(
            (self.config.keep_batch_size * 2,),
            device=timesteps.device,
            dtype=timesteps.dtype,
        )
        warmup_context = None
        if context is not None:
            warmup_context = torch.zeros(
                (
                    self.config.keep_batch_size * 2,
                    self.config.keep_embedding_block * 77,
                    context.shape[2],
                ),
                device=context.device,
                dtype=context.dtype,
            )
        warmup_y = None
        if y is not None:
            warmup_y = torch.zeros(
                (
                    self.config.keep_batch_size * 2,
                    y.shape[1],
                ),
                device=y.device,
                dtype=y.dtype,
            )

        self(
            warmup_x,
            warmup_timesteps,
            warmup_context,
            warmup_y,
            None,
            {},
            **kwargs,
        )

    def __call__(
        self,
        x,
        timesteps=None,
        context=None,
        y=None,
        control=None,
        transformer_options={},
        **kwargs,
    ):
        if self.tensorrt_module is None:
            self.tensorrt_module = TensorRTEngineBlockContext()
            self.tensorrt_module.tensorrt_context.keep_models.append(self.model)

            self.tensorrt_module.tensorrt_context.model_type = (
                self.model_config.__class__.__name__
            )
            self.tensorrt_module.tensorrt_context.unet_config = (
                self.model_config.unet_config
            )
            self.tensorrt_module.tensorrt_context.model_sampling_type = (
                self.model_sampling_type
            )
            self.tensorrt_module.tensorrt_context.cuda_stream = (
                torch.cuda.current_stream()
            )
            self.tensorrt_module.tensorrt_context.cuda_device = x.device
            
            self.warmup(
                x,
                timesteps,
                context,
                y,
                control,
                transformer_options,
                **kwargs,
            )

        transformer_options[TENSORRT_CONTEXT_KEY] = self.tensorrt_module

        do_hook_forward_timestep_embed()
        try:
            out = self.model(
                x,
                timesteps,
                context,
                y,
                control,
                transformer_options,
                **kwargs,
            )
        finally:
            undo_hook_forward_timestep_embed()
            transformer_options.pop(TENSORRT_CONTEXT_KEY)

        return out

    def to(self, device):
        if type(device) is torch.device:
            self.model_device = device
        return self


class UnetTensorRTPatch(BlockTensorRTPatch):
    def __init__(self, *args):
        super().__init__(*args)
        self.tensorrt_context = TensorRTEngineContext()

    def __call__(
        self,
        x,
        timesteps=None,
        context=None,
        y=None,
        control=None,
        transformer_options={},
        **kwargs,
    ):
        if self.tensorrt_module is None:
            devices = set((v.device for v in self.model.state_dict().values()))
            if torch.device("cpu") in devices and self.lowvram_model_memory > 0:
                self.tensorrt_context.enable_weight_streaming = True
                self.tensorrt_context.lowvram_model_memory = self.lowvram_model_memory

            self.tensorrt_context.model_type = self.model_config.__class__.__name__
            self.tensorrt_context.unet_config = self.model_config.unet_config
            self.tensorrt_context.model_sampling_type = self.model_sampling_type

            if self.tensorrt_context.cuda_stream is None:
                # self.tensorrt_context.cuda_stream = torch.cuda.current_stream()
                self.tensorrt_context.cuda_stream = torch.cuda.Stream(x.device)
                self.tensorrt_context.infer_cuda_stream_sync = True

            self.tensorrt_context.identify_weight_hash = (
                self.config.use_dedicated_engine
            )

            self.tensorrt_context.cuda_device = x.device
            # self.tensorrt_context.dtype = input_x.dtype

            self.tensorrt_module = (
                CallableTensorRTEngineWrapperDynamicShapeUNetModelForward(
                    self.tensorrt_context, ""
                )
            )
            if control is None:
                self.warmup(
                    x,
                    timesteps,
                    context,
                    y,
                    control,
                    transformer_options,
                    **kwargs,
                )

        module_factory = UNetModelModuleFactory(
            self.model,
            self.model_config,
            x=x,
            timesteps=timesteps,
            context=context,
            y=y,
            control=control,
            transformer_options=transformer_options,
            **kwargs,
        )

        with module_factory.converted_module_context() as (m_model, m_kwargs):
            out = self.tensorrt_module(m_model, **m_kwargs)

        return out


class ModelUnetFunctionWrapper:
    def __init__(self, patch):
        self.patch = patch

    def __deepcopy__(self, memo=None):
        return self

    def __call__(self, model_function, params):
        input_x = params.get("input")
        timestep_ = params.get("timestep")
        c = params.get("c")

        origin_diffusion_model = model_function.__self__.diffusion_model
        self.patch.model = origin_diffusion_model
        model_function.__self__.diffusion_model = self.patch
        try:
            out = model_function(input_x, timestep_, **c)
        finally:
            model_function.__self__.diffusion_model = origin_diffusion_model

        return out
SYMBOL INDEX (262 symbols across 17 files)

FILE: module/comfy_trace/model_base.py
  class BaseModelApplyModelModule (line 21) | class BaseModelApplyModelModule(torch.nn.Module):
    method __init__ (line 22) | def __init__(self, func, module):
    method forward (line 27) | def forward(
  class BaseModelApplyModelModuleFactory (line 54) | class BaseModelApplyModelModuleFactory(ModuleFactory):
    method __init__ (line 64) | def __init__(self, callable, kwargs) -> None:
    method gen_converted_kwargs (line 72) | def gen_converted_kwargs(self):
    method gen_cache_key (line 107) | def gen_cache_key(self):
    method converted_module_context (line 132) | def converted_module_context(self):
  class UNetModelModule (line 153) | class UNetModelModule(torch.nn.Module):
    method __init__ (line 154) | def __init__(self, module):
    method forward (line 158) | def forward(
  class UNetModelModuleFactory (line 183) | class UNetModelModuleFactory(ModuleFactory):
    method __init__ (line 192) | def __init__(self, diffusion_model, unet_config, **kwargs) -> None:
    method gen_converted_kwargs (line 200) | def gen_converted_kwargs(self):
    method gen_cache_key (line 235) | def gen_cache_key(self):
    method converted_module_context (line 260) | def converted_module_context(self):

FILE: module/comfy_trace/nodes_freelunch.py
  function Fourier_filter (line 8) | def Fourier_filter(x, threshold: int, scale: float):
  class FreeU (line 29) | class FreeU(torch.nn.Module):
    method __init__ (line 30) | def __init__(self, scale_map):
    method forward (line 34) | def forward(self, h, hsp, parameter, transformer_options):
    method from_closure (line 42) | def from_closure(closure, transformer_options):
    method gen_cache_key (line 50) | def gen_cache_key(self):
  class FreeU_V2 (line 54) | class FreeU_V2(torch.nn.Module):
    method __init__ (line 55) | def __init__(self, scale_map):
    method forward (line 59) | def forward(self, h, hsp, parameter, transformer_options):
    method from_closure (line 79) | def from_closure(closure, transformer_options):
    method gen_cache_key (line 89) | def gen_cache_key(self):

FILE: module/comfy_trace/nodes_model_downscale.py
  class PatchModelAddDownscale_input_block_patch (line 5) | class PatchModelAddDownscale_input_block_patch(torch.nn.Module):
    method __init__ (line 6) | def __init__(
    method forward (line 23) | def forward(self, h, parameter, transformer_options):
    method from_closure (line 36) | def from_closure(closure, transformer_options):
    method gen_cache_key (line 54) | def gen_cache_key(self):
  class PatchModelAddDownscale_output_block_patch (line 67) | class PatchModelAddDownscale_output_block_patch(torch.nn.Module):
    method __init__ (line 68) | def __init__(self, upscale_method):
    method forward (line 72) | def forward(self, h, hsp, parameter, transformer_options):
    method from_closure (line 84) | def from_closure(closure, transformer_options):
    method gen_cache_key (line 93) | def gen_cache_key(self):

FILE: module/comfy_trace/openaimodel.py
  class ForwardTimestepEmbedModule (line 15) | class ForwardTimestepEmbedModule(th.nn.Module):
    method __init__ (line 16) | def __init__(self, ts, transformer_options={}, num_video_frames=None):
    method forward (line 22) | def forward(
  class PatchUNetModel (line 46) | class PatchUNetModel(UNetModel):
    method cast_from (line 48) | def cast_from(other):
    method cast_to_base_model (line 56) | def cast_to_base_model(self):
    method patch_init (line 61) | def patch_init(self):
    method patch_deinit (line 72) | def patch_deinit(self):
    method set_patch_module (line 77) | def set_patch_module(self, patch_module):
    method forward (line 102) | def forward(

FILE: module/comfy_trace/sd.py
  class VAEDecodeModule (line 4) | class VAEDecodeModule(torch.nn.Module):
    method __init__ (line 5) | def __init__(self, module, decode):
    method forward (line 10) | def forward(self, samples):

FILE: module/comfy_trace_utilities.py
  function hash_arg (line 7) | def hash_arg(arg):
  class ModuleWrapper (line 25) | class ModuleWrapper(torch.nn.Module):
    method __init__ (line 26) | def __init__(self, module):
    method forward (line 30) | def forward(self, *args, **kwargs):
  class ModuleFactory (line 34) | class ModuleFactory:
    method __init__ (line 35) | def __init__(self, callable, kwargs) -> None:
    method gen_converted_kwargs (line 40) | def gen_converted_kwargs(self):
    method get_converted_kwargs (line 43) | def get_converted_kwargs(self):
    method gen_cache_key (line 46) | def gen_cache_key(self):
    method converted_module_context (line 53) | def converted_module_context(self):
    method load_state_dict_to_module (line 56) | def load_state_dict_to_module(self, script_module):
  class TracerWithCache (line 64) | class TracerWithCache:
    method get_traced_module (line 68) | def get_traced_module(module_factory: ModuleFactory, device=None):

FILE: module/controlnet_tensorrt.py
  class CallableTensorRTEngineWrapperDynamicShapeControlNet (line 4) | class CallableTensorRTEngineWrapperDynamicShapeControlNet(
    method gen_onnx_args (line 9) | def gen_onnx_args(self, kwargs, module=None):
    method gen_tensorrt_args (line 27) | def gen_tensorrt_args(self, kwargs):
    method gen_tensorrt_args_profile (line 38) | def gen_tensorrt_args_profile(self, input_shape_info):
    method gen_onnx_outputs (line 60) | def gen_onnx_outputs(self, module):
    method gen_tensorrt_outputs (line 67) | def gen_tensorrt_outputs(self, output_map):

FILE: module/model_base_tensorrt.py
  class CallableTensorRTEngineWrapperDynamicShapeBaseModelApplyModel (line 6) | class CallableTensorRTEngineWrapperDynamicShapeBaseModelApplyModel(
    method gen_onnx_args (line 18) | def gen_onnx_args(self, kwargs, module=None):
    method gen_tensorrt_args (line 54) | def gen_tensorrt_args(self, kwargs):
    method gen_tensorrt_args_profile (line 73) | def gen_tensorrt_args_profile(self, input_shape_info):

FILE: module/onnx_module_refit.py
  class ParamsDictGenMapValue (line 13) | class ParamsDictGenMapValue:
  function make_module_onnx_tensor_gen_map_by_params_dict (line 18) | def make_module_onnx_tensor_gen_map_by_params_dict(
  function make_module_onnx_tensor_gen_map_by_onnx_model (line 64) | def make_module_onnx_tensor_gen_map_by_onnx_model(
  function make_params_dict_by_module (line 73) | def make_params_dict_by_module(
  function make_constant_params_dict_by_onnx_model (line 95) | def make_constant_params_dict_by_onnx_model(

FILE: module/openaimodel_tensorrt.py
  class TensorRTEngineBlockContext (line 21) | class TensorRTEngineBlockContext:
    method dump_input_profile_info (line 29) | def dump_input_profile_info(self):
  class CallableTensorRTEngineWrapperDynamicShapeForwardTimestep (line 36) | class CallableTensorRTEngineWrapperDynamicShapeForwardTimestep(
    method gen_onnx_args (line 48) | def gen_onnx_args(self, kwargs, module=None):
    method gen_tensorrt_args (line 66) | def gen_tensorrt_args(self, kwargs):
    method gen_tensorrt_args_profile (line 77) | def gen_tensorrt_args_profile(self, input_shape_info):
  function hook_forward_timestep_embed (line 100) | def hook_forward_timestep_embed(
  function do_hook_forward_timestep_embed (line 138) | def do_hook_forward_timestep_embed():
  function undo_hook_forward_timestep_embed (line 144) | def undo_hook_forward_timestep_embed():
  class CallableTensorRTEngineWrapperDynamicShapeUNetModelForward (line 150) | class CallableTensorRTEngineWrapperDynamicShapeUNetModelForward(
    method gen_onnx_args (line 161) | def gen_onnx_args(self, kwargs, module=None):
    method gen_tensorrt_args (line 197) | def gen_tensorrt_args(self, kwargs):
    method gen_tensorrt_args_profile (line 216) | def gen_tensorrt_args_profile(self, input_shape_info):

FILE: module/patched_onnx_export/utils_2_4_0.py
  function is_in_onnx_export (line 72) | def is_in_onnx_export() -> bool:
  function select_model_mode_for_export (line 84) | def select_model_mode_for_export(model, mode: _C_onnx.TrainingMode):
  function disable_apex_o2_state_dict_hook (line 133) | def disable_apex_o2_state_dict_hook(
  function setup_onnx_logging (line 167) | def setup_onnx_logging(verbose: bool):
  function exporter_context (line 180) | def exporter_context(model, mode: _C_onnx.TrainingMode, verbose: bool):
  function export (line 191) | def export(
  function _is_constant_tensor_list (line 573) | def _is_constant_tensor_list(node):
  function _split_tensor_list_constants (line 588) | def _split_tensor_list_constants(g, block):
  function _optimize_graph (line 611) | def _optimize_graph(
  function warn_on_static_input_change (line 754) | def warn_on_static_input_change(input_states):
  function _resolve_args_by_export_type (line 784) | def _resolve_args_by_export_type(arg_name, arg_value, operator_export_ty...
  function _decide_keep_init_as_input (line 801) | def _decide_keep_init_as_input(
  function _decide_add_node_names (line 845) | def _decide_add_node_names(add_node_names, operator_export_type):
  function _decide_constant_folding (line 852) | def _decide_constant_folding(do_constant_folding, operator_export_type, ...
  function _signature (line 871) | def _signature(model) -> inspect.Signature:
  function _decide_input_format (line 879) | def _decide_input_format(model, args):
  function _trace (line 918) | def _trace(func, args, operator_export_type, return_outs=False):
  function _trace_and_get_graph_from_model (line 939) | def _trace_and_get_graph_from_model(model, args):
  function _get_param_count_list (line 970) | def _get_param_count_list(method_graph, args_params):
  function _check_flatten_did_not_remove (line 983) | def _check_flatten_did_not_remove(original, jit_flattened):
  function _create_jit_graph (line 1008) | def _create_jit_graph(
  function _get_named_param_dict (line 1060) | def _get_named_param_dict(graph, params):
  function _get_example_outputs (line 1068) | def _get_example_outputs(model, args):
  function unpack_quantized_tensor (line 1093) | def unpack_quantized_tensor(value, cast_onnx_accepted=True):
  function _pre_trace_quant_model (line 1114) | def _pre_trace_quant_model(model, args):
  function _model_to_graph (line 1128) | def _model_to_graph(
  function export_to_pretty_string (line 1274) | def export_to_pretty_string(
  function unconvertible_ops (line 1351) | def unconvertible_ops(
  function _setup_trace_module_map (line 1420) | def _setup_trace_module_map(
  function _reset_trace_module_map (line 1498) | def _reset_trace_module_map():
  function _get_module_attributes (line 1504) | def _get_module_attributes(module):
  function _export (line 1529) | def _export(
  function _apply_friendly_debug_names (line 1752) | def _apply_friendly_debug_names(graph, params):
  function _set_input_and_output_names (line 1765) | def _set_input_and_output_names(graph, input_names, output_names):
  function _run_symbolic_method (line 1798) | def _run_symbolic_method(g, op_name, symbolic_fn, args):
  function _add_block (line 1824) | def _add_block(node: _C.Node) -> _C.Block:
  function _add_input_to_block (line 1829) | def _add_input_to_block(block: _C.Block):
  function _add_output_to_block (line 1834) | def _add_output_to_block(block: _C.Block, value: _C.Value) -> int:
  function _should_aten_fallback (line 1839) | def _should_aten_fallback(
  function _need_symbolic_context (line 1870) | def _need_symbolic_context(symbolic_fn: Callable) -> bool:
  function _symbolic_context_handler (line 1886) | def _symbolic_context_handler(symbolic_fn: Callable) -> Callable:
  function _get_aten_op_overload_name (line 1911) | def _get_aten_op_overload_name(n: _C.Node) -> str:
  function _run_symbolic_function (line 1920) | def _run_symbolic_function(
  function _verify_custom_op_name (line 2045) | def _verify_custom_op_name(symbolic_name: str):
  function register_custom_op_symbolic (line 2062) | def register_custom_op_symbolic(
  function unregister_custom_op_symbolic (line 2099) | def unregister_custom_op_symbolic(symbolic_name: str, opset_version: int):
  function _validate_dynamic_axes (line 2118) | def _validate_dynamic_axes(dynamic_axes, model, input_names, output_names):
  function model_signature (line 2162) | def model_signature(model: Union[torch.nn.Module, Callable]) -> inspect....

FILE: module/sd_tensorrt.py
  class CallableTensorRTEngineWrapperDynamicShapeVAEDecode (line 4) | class CallableTensorRTEngineWrapperDynamicShapeVAEDecode(CallableTensorR...
    method gen_onnx_args (line 9) | def gen_onnx_args(self, kwargs, module=None):
    method gen_tensorrt_args (line 24) | def gen_tensorrt_args(self, kwargs):
    method gen_tensorrt_args_profile (line 35) | def gen_tensorrt_args_profile(self, input_shape_info):

FILE: module/sfast_pipeline_compiler.py
  class TracedModuleCacheItem (line 20) | class TracedModuleCacheItem:
  class LazyTraceModule (line 26) | class LazyTraceModule:
    method __init__ (line 29) | def __init__(self, config=None, patch_id=None, **kwargs_) -> None:
    method ts_compiler (line 43) | def ts_compiler(
    method __call__ (line 60) | def __call__(self, model_function, /, **kwargs):
    method to_empty (line 109) | def to_empty(self):
  function build_lazy_trace_module (line 115) | def build_lazy_trace_module(config, device, patch_id):

FILE: module/tensorrt_utilities.py
  class TQDMProgressMonitor (line 67) | class TQDMProgressMonitor(trt.IProgressMonitor):
    method __init__ (line 68) | def __init__(self):
    method phase_start (line 74) | def phase_start(self, phase_name, parent_phase, num_steps):
    method phase_finish (line 100) | def phase_finish(self, phase_name):
    method step_complete (line 126) | def step_complete(self, phase_name, step):
  class Engine (line 138) | class Engine:
    method __init__ (line 139) | def __init__(self, engine_path, enable_cuda_graph=False):
    method __del__ (line 156) | def __del__(self):
    method refit_simple (line 162) | def refit_simple(self, onnx_model):
    method refit_from_dict (line 175) | def refit_from_dict(
    method build (line 232) | def build(
    method save_engine (line 340) | def save_engine(self):
    method load (line 345) | def load(self):
    method update_binding_set (line 356) | def update_binding_set(self):
    method offload (line 361) | def offload(self, offload_context_only=False):
    method is_weight_streaming_engine (line 386) | def is_weight_streaming_engine(self):
    method activate (line 389) | def activate(
    method get_device_memory_size (line 416) | def get_device_memory_size(self):
    method allocate_buffers (line 427) | def allocate_buffers(
    method release_buffers (line 462) | def release_buffers(self):
    method infer (line 465) | def infer(
    method set_static_dict_input (line 516) | def set_static_dict_input(self, feed_dict):
    method __str__ (line 524) | def __str__(self):

FILE: module/tensorrt_wrapper.py
  class TensorRTEngineConfig (line 36) | class TensorRTEngineConfig:
  class CallableTensorRTEngineWrapper (line 45) | class CallableTensorRTEngineWrapper:
    method __init__ (line 46) | def __init__(self, tensorrt_context, identification) -> None:
    method gen_onnx_args (line 63) | def gen_onnx_args(self, kwargs, module=None):
    method gen_onnx_outputs (line 73) | def gen_onnx_outputs(self, module):
    method gen_tensorrt_args (line 76) | def gen_tensorrt_args(self, kwargs):
    method gen_tensorrt_args_profile (line 86) | def gen_tensorrt_args_profile(self, input_shape_info):
    method gen_tensorrt_outputs (line 89) | def gen_tensorrt_outputs(self, output):
    method is_profile_compatible (line 92) | def is_profile_compatible(self, input_profile_info, input_shape_info):
    method __call__ (line 108) | def __call__(self, module: torch.nn.Module, /, **kwargs: Any) -> Any:
  class TensorRTEngineComfyModelPatcherWrapper (line 337) | class TensorRTEngineComfyModelPatcherWrapper(comfy.model_patcher.ModelPa...
    method patch_model_lowvram (line 338) | def patch_model_lowvram(self, device_to=None, *arg, **kwargs):
    method patch_model (line 341) | def patch_model(self, device_to=None, *arg, **kwargs):
    method unpatch_model (line 351) | def unpatch_model(self, device_to=None, *arg, **kwargs):
  function get_additional_keep_models (line 357) | def get_additional_keep_models():
  class TensorRTEngineContext (line 368) | class TensorRTEngineContext:
  function get_key_hash (line 395) | def get_key_hash(key):
  function get_cache_path (line 399) | def get_cache_path(key, dir_name):
  function get_engine_path (line 407) | def get_engine_path(key):
  function get_engine_with_cache (line 411) | def get_engine_with_cache(key):
  function gen_engine (line 418) | def gen_engine(key, onnx_model, input_profile, dtype, enable_weight_stre...
  function get_refit_info_cache (line 434) | def get_refit_info_cache(key):
  function gen_refit_info (line 441) | def gen_refit_info(key):
  class TorchTensorRTRefitInfo (line 446) | class TorchTensorRTRefitInfo:
    method __init__ (line 447) | def __init__(self, info_path) -> None:
    method save (line 452) | def save(self):
    method load (line 459) | def load(self):
  function sha256sum_state_dict (line 467) | def sha256sum_state_dict(state_dict: dict[str, torch.Tensor]):

FILE: node.py
  function is_cuda_malloc_async (line 7) | def is_cuda_malloc_async():
  function gen_stable_fast_config (line 11) | def gen_stable_fast_config():
  class StableFastPatch (line 40) | class StableFastPatch:
    method __init__ (line 41) | def __init__(self, model, config):
    method __deepcopy__ (line 46) | def __deepcopy__(self, memo=None):
    method __call__ (line 49) | def __call__(self, model_function, params):
    method to (line 69) | def to(self, device):
  class ApplyStableFastUnet (line 85) | class ApplyStableFastUnet:
    method INPUT_TYPES (line 87) | def INPUT_TYPES(s):
    method apply_stable_fast (line 100) | def apply_stable_fast(self, model, enable_cuda_graph):

FILE: tensorrt_node.py
  class BlockTensorRTPatch (line 27) | class BlockTensorRTPatch(torch.nn.Module):
    method __init__ (line 28) | def __init__(self, config, model_config, model_sampling_type):
    method __deepcopy__ (line 38) | def __deepcopy__(self, memo=None):
    method dtype (line 42) | def dtype(self):
    method warmup (line 45) | def warmup(
    method __call__ (line 103) | def __call__(
    method to (line 160) | def to(self, device):
  class UnetTensorRTPatch (line 166) | class UnetTensorRTPatch(BlockTensorRTPatch):
    method __init__ (line 167) | def __init__(self, *args):
    method __call__ (line 171) | def __call__(
  class ModelUnetFunctionWrapper (line 237) | class ModelUnetFunctionWrapper:
    method __init__ (line 238) | def __init__(self, patch):
    method __deepcopy__ (line 241) | def __deepcopy__(self, memo=None):
    method __call__ (line 244) | def __call__(self, model_function, params):
  function hook_memory_required (line 260) | def hook_memory_required(input_shape):
  class TensorRTEngineOriginModelPatcherWrapper_BlockPatch (line 264) | class TensorRTEngineOriginModelPatcherWrapper_BlockPatch(
    method cast_from (line 268) | def cast_from(other):
    method patch_init (line 275) | def patch_init(self, tensorrt_module_patch):
    method patch_deinit (line 278) | def patch_deinit(self):
    method cast_to_base_model (line 282) | def cast_to_base_model(self):
    method patch_model (line 287) | def patch_model(self, device_to=None, *arg, **kwargs):
    method __del__ (line 306) | def __del__(self):
  class TensorRTEngineOriginModelPatcherWrapper_UnetPatch (line 310) | class TensorRTEngineOriginModelPatcherWrapper_UnetPatch(
    method cast_from (line 314) | def cast_from(other):
    method patch_init (line 321) | def patch_init(self, tensorrt_module_patch):
    method patch_deinit (line 324) | def patch_deinit(self):
    method cast_to_base_model (line 328) | def cast_to_base_model(self):
    method model_size (line 333) | def model_size(self):
    method patch_model_lowvram (line 341) | def patch_model_lowvram(
    method patch_model (line 366) | def patch_model(self, device_to=None, *arg, **kwargs):
    method __del__ (line 374) | def __del__(self):
  class PatchType (line 378) | class PatchType(enum.Enum):
  class ApplyTensorRTUnet (line 383) | class ApplyTensorRTUnet:
    method INPUT_TYPES (line 385) | def INPUT_TYPES(s):
    method apply_tensorrt (line 410) | def apply_tensorrt(
  class VAEDecodeTensorRTPatch (line 442) | class VAEDecodeTensorRTPatch:
    method __init__ (line 443) | def __init__(self, model, config):
    method warmup (line 450) | def warmup(self, samples_in):
    method __call__ (line 464) | def __call__(self, samples_in):
  class ApplyTensorRTVaeDecoder (line 494) | class ApplyTensorRTVaeDecoder:
    method INPUT_TYPES (line 496) | def INPUT_TYPES(s):
    method apply_tensorrt (line 517) | def apply_tensorrt(
  class ControlNetTensorRTPatch (line 537) | class ControlNetTensorRTPatch:
    method __init__ (line 538) | def __init__(self, control_model, config):
    method state_dict (line 545) | def state_dict(self):
    method to (line 548) | def to(self, device):
    method warmup (line 551) | def warmup(self, x, hint, timesteps, context, y=None):
    method __call__ (line 589) | def __call__(self, x, hint, timesteps, context, y=None):
  class ApplyTensorRTControlNet (line 610) | class ApplyTensorRTControlNet:
    method INPUT_TYPES (line 612) | def INPUT_TYPES(s):
    method apply_tensorrt (line 634) | def apply_tensorrt(
