[
  {
    "path": ".gitignore",
    "content": "# Byte-compiled / optimized / DLL files\n__pycache__/\n*.py[cod]\n*$py.class\n\n# C extensions\n*.so\n\n# Distribution / packaging\n.Python\nbuild/\ndevelop-eggs/\ndist/\ndownloads/\neggs/\n.eggs/\nlib/\nlib64/\nparts/\nsdist/\nvar/\nwheels/\nshare/python-wheels/\n*.egg-info/\n.installed.cfg\n*.egg\nMANIFEST\n\n# PyInstaller\n#  Usually these files are written by a python script from a template\n#  before PyInstaller builds the exe, so as to inject date/other infos into it.\n*.manifest\n*.spec\n\n# Installer logs\npip-log.txt\npip-delete-this-directory.txt\n\n# Unit test / coverage reports\nhtmlcov/\n.tox/\n.nox/\n.coverage\n.coverage.*\n.cache\nnosetests.xml\ncoverage.xml\n*.cover\n*.py,cover\n.hypothesis/\n.pytest_cache/\ncover/\n\n# Translations\n*.mo\n*.pot\n\n# Django stuff:\n*.log\nlocal_settings.py\ndb.sqlite3\ndb.sqlite3-journal\n\n# Flask stuff:\ninstance/\n.webassets-cache\n\n# Scrapy stuff:\n.scrapy\n\n# Sphinx documentation\ndocs/_build/\n\n# PyBuilder\n.pybuilder/\ntarget/\n\n# Jupyter Notebook\n.ipynb_checkpoints\n\n# IPython\nprofile_default/\nipython_config.py\n\n# pyenv\n#   For a library or package, you might want to ignore these files since the code is\n#   intended to run in multiple environments; otherwise, check them in:\n# .python-version\n\n# pipenv\n#   According to pypa/pipenv#598, it is recommended to include Pipfile.lock in version control.\n#   However, in case of collaboration, if having platform-specific dependencies or dependencies\n#   having no cross-platform support, pipenv may install dependencies that don't work, or not\n#   install all needed dependencies.\n#Pipfile.lock\n\n# poetry\n#   Similar to Pipfile.lock, it is generally recommended to include poetry.lock in version control.\n#   This is especially recommended for binary packages to ensure reproducibility, and is more\n#   commonly ignored for libraries.\n#   https://python-poetry.org/docs/basic-usage/#commit-your-poetrylock-file-to-version-control\n#poetry.lock\n\n# pdm\n#   Similar to Pipfile.lock, it is generally recommended to include pdm.lock in version control.\n#pdm.lock\n#   pdm stores project-wide configurations in .pdm.toml, but it is recommended to not include it\n#   in version control.\n#   https://pdm.fming.dev/#use-with-ide\n.pdm.toml\n\n# PEP 582; used by e.g. github.com/David-OConnor/pyflow and github.com/pdm-project/pdm\n__pypackages__/\n\n# Celery stuff\ncelerybeat-schedule\ncelerybeat.pid\n\n# SageMath parsed files\n*.sage.py\n\n# Environments\n.env\n.venv\nenv/\nvenv/\nENV/\nenv.bak/\nvenv.bak/\n\n# Spyder project settings\n.spyderproject\n.spyproject\n\n# Rope project settings\n.ropeproject\n\n# mkdocs documentation\n/site\n\n# mypy\n.mypy_cache/\n.dmypy.json\ndmypy.json\n\n# Pyre type checker\n.pyre/\n\n# pytype static type analyzer\n.pytype/\n\n# Cython debug symbols\ncython_debug/\n\n# PyCharm\n#  JetBrains specific template is maintained in a separate JetBrains.gitignore that can\n#  be found at https://github.com/github/gitignore/blob/main/Global/JetBrains.gitignore\n#  and can be added to the global gitignore or merged into this file.  For a more nuclear\n#  option (not recommended) you can uncomment the following to ignore the entire idea folder.\n#.idea/\nrepo/\ntensorrt_engine_cache/\nrefit_info/"
  },
  {
    "path": "LICENSE",
    "content": "MIT License\n\nCopyright (c) 2024 gameltb\n\nPermission is hereby granted, free of charge, to any person obtaining a copy\nof this software and associated documentation files (the \"Software\"), to deal\nin the Software without restriction, including without limitation the rights\nto use, copy, modify, merge, publish, distribute, sublicense, and/or sell\ncopies of the Software, and to permit persons to whom the Software is\nfurnished to do so, subject to the following conditions:\n\nThe above copyright notice and this permission notice shall be included in all\ncopies or substantial portions of the Software.\n\nTHE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\nIMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\nFITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE\nAUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\nLIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,\nOUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE\nSOFTWARE.\n"
  },
  {
    "path": "README.md",
    "content": "# ComfyUI_stable_fast\n\nExperimental usage of [stable-fast](https://github.com/chengzeyi/stable-fast) and TensorRT.\n\n> [!NOTE]\n>\n> Official TensorRT node https://github.com/comfyanonymous/ComfyUI_TensorRT  \n> This repo is still experimental, just want to try TensorRT that doesn't need to be compiled repeatedly.\n\n[Speed Test](#speed-test)\n\n# Update\n\n- 2024-07-31 : Unfortunately, using the same engine on different models will result in a slight variation in the results or complete unusability. Added an option to allow building dedicated engines for different models. However, some models still have different outputs than PyTorch.\n- 2024-07-29 : significantly improved performance of starting and switching TensorRT models when there is an engine cache on PyTorch 2.4.0. add WEIGHT_STREAMING support, you can run SDXL on 6GB device with TensorRT. However, the engine unloading caused by VAE decoding can greatly slow down the overall generation speed.\n\n# Installation\n\n```bash\ngit clone https://github.com/gameltb/ComfyUI_stable_fast custom_nodes/ComfyUI_stable_fast\n```\n\n## stable-fast\n\nYou'll need to follow the guide below to enable stable fast node.\n\n[stable-fast installation](https://github.com/chengzeyi/stable-fast?tab=readme-ov-file#installation)\n\n> [!NOTE]\n>\n> Requires stable-fast >= 1.0.0 .\n\n## TensorRT(testing)\n\n> [!NOTE]\n>\n> Currently only tested on linux, Not tested on Windows.\n\nThe following needs to be installed when you use TensorRT.\n\n```bash\npip install onnx zstandard onnxscript --upgrade\npip install --pre --upgrade --extra-index-url https://pypi.nvidia.com tensorrt==10.2.0\npip install onnx-graphsurgeon polygraphy --extra-index-url https://pypi.ngc.nvidia.com\n```\n\n## Usage\n\nPlease refer to the [screenshot](#screenshot)\n\n## stable-fast\n\nIt can work with Lora, ControlNet and lcm. SD1.5 and SSD-1B are supported. SDXL should work.  \nRun ComfyUI with `--disable-cuda-malloc` may be possible to optimize the speed further.\n\n> [!NOTE]\n>\n> - FreeU and PatchModelAddDownscale are now supported experimentally, Just use the comfy node normally.\n> - stable fast not work well with accelerate, So this node has no effect when the vram is low. For example: 6G vram card run SDXL.\n> - stable fast will optimize the speed when generating images using the same model for the second time. if you switch models or Lora frequently, please consider disable enable_cuda_graph.\n> - **It is better to connect the `Apply StableFast Unet` node directly to the `KSampler` node, and there should be no nodes between them that will change the weight, such as the `Load LoRA` node, but for some nodes, placing it between them can prevent useless recompilation caused by modifying the node parameters, such as the `FreeU` node, you can try to use other nodes, but I can't guarantee that it will work properly.**\n\n## TensorRT\n\nRun ComfyUI with `--disable-xformers --force-fp16 --fp16-vae` and use `Apply TensorRT Unet` like `Apply StableFast Unet`.  \nThe Engine will be cached in `tensorrt_engine_cache`.\n\n> [!NOTE]\n>\n> - If you encounter an error after updating, you can try deleting the `tensorrt_engine_cache`.\n\n### Apply TensorRT Unet Node\n\n- enable_cuda_graph\n  - With or without CUDA Graph, this should make it slightly faster, but at the moment there is a problem with the implementation and this has no effect. Also, even if it works, it won't work with WEIGHT_STREAMING.\n- patch_type\n  - `UNET` compiles the whole unet as a model, and it's faster. 
\n> [!NOTE]\n>\n> - If you encounter an error after updating, try deleting `tensorrt_engine_cache`.\n\n### Apply TensorRT Unet Node\n\n- enable_cuda_graph\n  - Whether to use CUDA Graph. This should make inference slightly faster, but a problem in the current implementation means it has no effect at the moment. Even when it works, it is incompatible with WEIGHT_STREAMING.\n- patch_type\n  - `UNET` compiles the whole UNet as a single model and is faster. However, some nodes (such as FreeU) are unusable because TensorRT does not support some of the PyTorch operations they use. Also, if you don't have enough video memory to hold the entire model, you'll need to select this option to benefit from TensorRT; otherwise it is likely to be slower than running PyTorch directly.\n  - `UNET_BLOCK` splits the UNet into several small models so that PyTorch can perform the operations between them that TensorRT does not support. It takes quite a bit of time to compile and load, and its generation speed is only slightly lower than `UNET`'s, so this option may not be worth using most of the time.\n- keep_width\n- keep_height\n- keep_batch_size\n- keep_embedding_block\n  - The `keep_` parameters above are used when building the engine; they specify the maximum values of the inputs the engine accepts. The node also looks up cached engines by these values, so if you want to build engines as rarely as possible, keep a fixed set of values for each model type (sd15, sdxl, and so on). If any parameter you use exceeds them, a rebuild is triggered. embedding_block is related to prompt length: the longer the prompt, the larger the value. See the illustrative example after this list.\n- use_dedicated_engine\n  - Build a dedicated engine for each model.\n
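\nFor instance (illustrative values only, not tuned recommendations): for SD1.5 workflows up to 768x768, batch size 1, and prompts up to roughly 150 tokens, you might pin:\n\n```\nkeep_width: 768\nkeep_height: 768\nkeep_batch_size: 1\nkeep_embedding_block: 2\n```\n\nAs long as every run stays within these limits, the cached engine is reused; exceeding any of them triggers a rebuild.\n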
\nWhen you use ControlNet, a different control image size will currently trigger an engine rebuild.\n\n# Table\n\n## Features\n\n|                  | Stable Fast            | TensorRT(UNET) | TensorRT(UNET_BLOCK) |\n| ---------------- | ---------------------- | -------------- | -------------------- |\n| SD1.5            | &check;                | &check;        | &check;              |\n| SDXL             | untested (should work) | &check;        | untested             |\n| SSD-1B           | &check;                | &check;        | &check;              |\n| Lora             | &check;                | &check;        | &check;              |\n| ControlNet Unet  | &check;                | &check;        | &check;              |\n| VAE decode       | WIP                    | &check;        | -                    |\n| ControlNet Model | WIP                    | WIP            | -                    |\n\n## Nodes Tested\n\n|                        | Stable Fast | TensorRT(UNET) | TensorRT(UNET_BLOCK) |\n| ---------------------- | ----------- | -------------- | -------------------- |\n| Load LoRA              | &check;     | &check;        | &check;              |\n| FreeU(FreeU_V2)        | &check;     | &cross;        | &check;              |\n| PatchModelAddDownscale | &check;     | WIP            | &check;              |\n\n## Speed Test\n\n### GeForce RTX 3060 Mobile\n\nGeForce RTX 3060 Mobile (80W) 6GB, Linux, torch 2.1.1, stable fast 0.0.14, tensorrt 9.2.0.post12.dev5, xformers 0.0.23.  \n[workflow](./tests/workflow.json): SD1.5, 512x512, batch_size 1, euler_ancestral karras, 20 steps, using fp16.\n\nStable Fast and xformers were tested with ComfyUI running with `--disable-cuda-malloc`.  \nTensorRT and PyTorch were tested with ComfyUI running with `--disable-xformers`.\n\n###### TensorRT Note\n\nOn the first TensorRT launch, building the engine can take up to 10 minutes; with a timing cache this drops to about 2–3 minutes, and with an engine cache to about 20–30 seconds for now.\n\n#### Avg it/s\n\n| workflow                         | Stable Fast (enable_cuda_graph) | TensorRT (UNET) | TensorRT (UNET_BLOCK) | pytorch cross attention | xformers |\n| -------------------------------- | ------------------------------- | --------------- | --------------------- | ----------------------- | -------- |\n|                                  | 10.10 it/s                      | 10.95it/s       | 10.66it/s             | 7.02it/s                | 7.90it/s |\n| enable FreeU                     | 9.42 it/s                       | &cross;         | 10.04it/s             | 6.75it/s                | 7.54it/s |\n| enable Patch Model Add Downscale | 10.81 it/s                      | &cross;         | 11.30it/s             | 7.46it/s                | 8.41it/s |\n\n#### Avg time spent\n\n| workflow                         | Stable Fast (enable_cuda_graph) | TensorRT (UNET) | TensorRT (UNET_BLOCK) | pytorch cross attention | xformers |\n| -------------------------------- | ------------------------------- | --------------- | --------------------- | ----------------------- | -------- |\n|                                  | 2.21s (first 17s)               | 2.05s           | 2.10s                 | 3.06s                   | 2.76s    |\n| enable FreeU                     | 2.35s (first 18.5s)             | &cross;         | 2.24s                 | 3.18s                   | 2.88s    |\n| enable Patch Model Add Downscale | 2.08s (first 31.37s)            | &cross;         | 2.03s                 | 2.89s                   | 2.61s    |\n\n# Screenshot\n\n![sd1.5](asset/scr.png)\n![ssd-1b](asset/scr1.png)\n"
  },
  {
    "path": "__init__.py",
    "content": "import traceback \nimport sys \n\nNODE_CLASS_MAPPINGS = {}\n\n# A dictionary that contains the friendly/humanly readable titles for the nodes\nNODE_DISPLAY_NAME_MAPPINGS = {}\n\ntry:\n    from .node import ApplyStableFastUnet\n\n    SF_NODE_CLASS_MAPPINGS = {\n        \"ApplyStableFastUnet\": ApplyStableFastUnet,\n    }\n\n    SF_NODE_DISPLAY_NAME_MAPPINGS = {\n        \"ApplyStableFastUnet\": \"Apply StableFast Unet\",\n    }\n    NODE_CLASS_MAPPINGS.update(SF_NODE_CLASS_MAPPINGS)\n    NODE_DISPLAY_NAME_MAPPINGS.update(SF_NODE_DISPLAY_NAME_MAPPINGS)\nexcept Exception as e:\n    print(\"ComfyUI_stable_fast: StableFast node import failed.\")\n    traceback.print_exception(*sys.exc_info()) \n\ntry:\n    from .tensorrt_node import (\n        ApplyTensorRTControlNet,\n        ApplyTensorRTUnet,\n        ApplyTensorRTVaeDecoder,\n    )\n\n    TRT_NODE_CLASS_MAPPINGS = {\n        \"ApplyTensorRTUnet\": ApplyTensorRTUnet,\n        \"ApplyTensorRTVaeDecoder\": ApplyTensorRTVaeDecoder,\n        \"ApplyTensorRTControlNet\": ApplyTensorRTControlNet,\n    }\n    TRT_NODE_DISPLAY_NAME_MAPPINGS = {\n        \"ApplyTensorRTUnet\": \"Apply TensorRT Unet\",\n        \"ApplyTensorRTVaeDecoder\": \"Apply TensorRT VaeDecoder\",\n        \"ApplyTensorRTControlNet\": \"Apply TensorRT ControlNet\",\n    }\n    NODE_CLASS_MAPPINGS.update(TRT_NODE_CLASS_MAPPINGS)\n    NODE_DISPLAY_NAME_MAPPINGS.update(TRT_NODE_DISPLAY_NAME_MAPPINGS)\nexcept Exception as e:\n    print(\"ComfyUI_stable_fast: tensorrt_node import failed.\")\n    traceback.print_exception(*sys.exc_info()) \n\nif len(NODE_CLASS_MAPPINGS) == 0:\n    raise Exception(\"import failed\")"
  },
  {
    "path": "module/comfy_trace/model_base.py",
    "content": "import contextlib\n\nimport torch\n\nfrom ..comfy_trace_utilities import ModuleFactory, hash_arg\nfrom .nodes_freelunch import FreeU, FreeU_V2\nfrom .nodes_model_downscale import (\n    PatchModelAddDownscale_input_block_patch,\n    PatchModelAddDownscale_output_block_patch,\n)\nfrom .openaimodel import PatchUNetModel\n\nPATCH_PATCH_MAP = {\n    \"FreeU.patch.<locals>.output_block_patch\": FreeU,\n    \"FreeU_V2.patch.<locals>.output_block_patch\": FreeU_V2,\n    \"PatchModelAddDownscale.patch.<locals>.input_block_patch\": PatchModelAddDownscale_input_block_patch,\n    \"PatchModelAddDownscale.patch.<locals>.output_block_patch\": PatchModelAddDownscale_output_block_patch,\n}\n\n\nclass BaseModelApplyModelModule(torch.nn.Module):\n    def __init__(self, func, module):\n        super().__init__()\n        self.func = func\n        self.module = module\n\n    def forward(\n        self,\n        input_x,\n        timestep,\n        c_concat=None,\n        c_crossattn=None,\n        y=None,\n        control=None,\n        transformer_options={},\n    ):\n        kwargs = {\"y\": y}\n\n        new_transformer_options = {}\n        if \"patches\" in transformer_options:\n            new_transformer_options[\"patches\"] = transformer_options[\"patches\"]\n\n        return self.func(\n            input_x,\n            timestep,\n            c_concat=c_concat,\n            c_crossattn=c_crossattn,\n            control=control,\n            transformer_options=new_transformer_options,\n            **kwargs,\n        )\n\n\nclass BaseModelApplyModelModuleFactory(ModuleFactory):\n    kwargs_name = (\n        \"input_x\",\n        \"timestep\",\n        \"c_concat\",\n        \"c_crossattn\",\n        \"y\",\n        \"control\",\n    )\n\n    def __init__(self, callable, kwargs) -> None:\n        self.callable = callable\n        self.unet_config = callable.__self__.model_config.unet_config\n        self.kwargs = kwargs\n        self.patch_module = {}\n        self.patch_module_parameter = {}\n        self.converted_kwargs = self.gen_converted_kwargs()\n\n    def gen_converted_kwargs(self):\n        converted_kwargs = {}\n        for arg_name, arg in self.kwargs.items():\n            if arg_name in self.kwargs_name:\n                converted_kwargs[arg_name] = arg\n\n        transformer_options = self.kwargs.get(\"transformer_options\", {})\n        patches = transformer_options.get(\"patches\", {})\n\n        patch_module = {}\n        patch_module_parameter = {}\n\n        for patch_type_name, patch_list in patches.items():\n            patch_module[patch_type_name] = []\n            patch_module_parameter[patch_type_name] = []\n            for patch in patch_list:\n                if patch.__qualname__ in PATCH_PATCH_MAP:\n                    patch, parameter = PATCH_PATCH_MAP[patch.__qualname__].from_closure(\n                        patch, transformer_options\n                    )\n                    patch_module[patch_type_name].append(patch)\n                    patch_module_parameter[patch_type_name].append(parameter)\n                    # output_block_patch_module.append(torch.jit.script(patch))\n                else:\n                    print(f\"\\33[93mWarning: Ignore patch {patch.__qualname__}.\\33[0m\")\n\n        new_transformer_options = {}\n        new_transformer_options[\"patches\"] = patch_module_parameter\n        if len(new_transformer_options[\"patches\"]) > 0:\n            converted_kwargs[\"transformer_options\"] = new_transformer_options\n\n        
self.patch_module = patch_module\n        self.patch_module_parameter = patch_module_parameter\n        return converted_kwargs\n\n    def gen_cache_key(self):\n        key_kwargs = {}\n        for k, v in self.converted_kwargs.items():\n            if k == \"transformer_options\":\n                nv = {}\n                for tk, tv in v.items():\n                    if tk not in (\"patches\",):  # ,\"cond_or_uncond\"\n                        nv[tk] = tv\n                v = nv\n            key_kwargs[k] = v\n\n        patch_module_cache_key = {}\n        for patch_type_name, patch_list in self.patch_module.items():\n            patch_module_cache_key[patch_type_name] = []\n            for patch in patch_list:\n                patch_module_cache_key[patch_type_name].append(patch.gen_cache_key())\n\n        return (\n            self.callable.__class__.__qualname__,\n            hash_arg(self.unet_config),\n            hash_arg(key_kwargs),\n            hash_arg(patch_module_cache_key),\n        )\n\n    @contextlib.contextmanager\n    def converted_module_context(self):\n        module = BaseModelApplyModelModule(self.callable, self.callable.__self__)\n\n        if len(self.patch_module) > 0:\n            self.callable.__self__.diffusion_model = PatchUNetModel.cast_from(\n                self.callable.__self__.diffusion_model\n            )\n            try:\n                self.callable.__self__.diffusion_model.set_patch_module(\n                    self.patch_module\n                )\n\n                yield (module, self.converted_kwargs)\n            finally:\n                self.callable.__self__.diffusion_model = (\n                    self.callable.__self__.diffusion_model.cast_to_base_model()\n                )\n        else:\n            yield (module, self.converted_kwargs)\n\n\nclass UNetModelModule(torch.nn.Module):\n    def __init__(self, module):\n        super().__init__()\n        self.module = module\n\n    def forward(\n        self,\n        x,\n        timesteps=None,\n        context=None,\n        y=None,\n        control=None,\n        transformer_options={},\n        **kwargs,\n    ):\n        new_transformer_options = {}\n        if \"patches\" in transformer_options:\n            new_transformer_options[\"patches\"] = transformer_options[\"patches\"]\n\n        return self.module(\n            x,\n            timesteps=timesteps,\n            context=context,\n            y=y,\n            control=control,\n            transformer_options=new_transformer_options,\n            **kwargs,\n        )\n\n\nclass UNetModelModuleFactory(ModuleFactory):\n    kwargs_name = (\n        \"x\",\n        \"timesteps\",\n        \"context\",\n        \"y\",\n        \"control\",\n    )\n\n    def __init__(self, diffusion_model, unet_config, **kwargs) -> None:\n        self.diffusion_model = diffusion_model\n        self.unet_config = unet_config\n        self.kwargs = kwargs\n        self.patch_module = {}\n        self.patch_module_parameter = {}\n        self.converted_kwargs = self.gen_converted_kwargs()\n\n    def gen_converted_kwargs(self):\n        converted_kwargs = {}\n        for arg_name, arg in self.kwargs.items():\n            if arg_name in self.kwargs_name:\n                converted_kwargs[arg_name] = arg\n\n        transformer_options = self.kwargs.get(\"transformer_options\", {})\n        patches = transformer_options.get(\"patches\", {})\n\n        patch_module = {}\n        patch_module_parameter = {}\n\n        for patch_type_name, patch_list in 
patches.items():\n            patch_module[patch_type_name] = []\n            patch_module_parameter[patch_type_name] = []\n            for patch in patch_list:\n                if patch.__qualname__ in PATCH_PATCH_MAP:\n                    patch, parameter = PATCH_PATCH_MAP[patch.__qualname__].from_closure(\n                        patch, transformer_options\n                    )\n                    patch_module[patch_type_name].append(patch)\n                    patch_module_parameter[patch_type_name].append(parameter)\n                    # output_block_patch_module.append(torch.jit.script(patch))\n                else:\n                    print(f\"\\33[93mWarning: Ignore patch {patch.__qualname__}.\\33[0m\")\n\n        new_transformer_options = {}\n        new_transformer_options[\"patches\"] = patch_module_parameter\n        if len(new_transformer_options[\"patches\"]) > 0:\n            converted_kwargs[\"transformer_options\"] = new_transformer_options\n\n        self.patch_module = patch_module\n        self.patch_module_parameter = patch_module_parameter\n        return converted_kwargs\n\n    def gen_cache_key(self):\n        key_kwargs = {}\n        for k, v in self.converted_kwargs.items():\n            if k == \"transformer_options\":\n                nv = {}\n                for tk, tv in v.items():\n                    if tk not in (\"patches\",):  # ,\"cond_or_uncond\"\n                        nv[tk] = tv\n                v = nv\n            key_kwargs[k] = v\n\n        patch_module_cache_key = {}\n        for patch_type_name, patch_list in self.patch_module.items():\n            patch_module_cache_key[patch_type_name] = []\n            for patch in patch_list:\n                patch_module_cache_key[patch_type_name].append(patch.gen_cache_key())\n\n        return (\n            self.diffusion_model.__class__.__qualname__,\n            hash_arg(self.unet_config),\n            hash_arg(key_kwargs),\n            hash_arg(patch_module_cache_key),\n        )\n\n    @contextlib.contextmanager\n    def converted_module_context(self):\n        module = UNetModelModule(self.diffusion_model)\n\n        if len(self.patch_module) > 0:\n            diffusion_model = PatchUNetModel.cast_from(self.diffusion_model)\n            try:\n                diffusion_model.set_patch_module(self.patch_module)\n\n                yield (module, self.converted_kwargs)\n            finally:\n                diffusion_model = diffusion_model.cast_to_base_model()\n        else:\n            yield (module, self.converted_kwargs)\n"
  },
  {
    "path": "module/comfy_trace/nodes_freelunch.py",
    "content": "# code originally taken from: https://github.com/ChenyangSi/FreeU (under MIT License)\n\nimport copy\n\nimport torch\n\n\ndef Fourier_filter(x, threshold: int, scale: float):\n    # FFT\n    x_freq = torch.fft.fftn(x.float(), dim=(-2, -1))\n    x_freq = torch.fft.fftshift(x_freq, dim=(-2, -1))\n\n    B, C, H, W = x_freq.shape\n    mask = torch.ones((B, C, H, W), device=x.device)\n\n    crow, ccol = H // 2, W // 2\n    mask[\n        ..., crow - threshold : crow + threshold, ccol - threshold : ccol + threshold\n    ] = scale\n    x_freq = x_freq * mask\n\n    # IFFT\n    x_freq = torch.fft.ifftshift(x_freq, dim=(-2, -1))\n    x_filtered = torch.fft.ifftn(x_freq, dim=(-2, -1)).real\n\n    return x_filtered.to(x.dtype)\n\n\nclass FreeU(torch.nn.Module):\n    def __init__(self, scale_map):\n        super().__init__()\n        self.scale_map = scale_map\n\n    def forward(self, h, hsp, parameter, transformer_options):\n        for k, scale in zip(self.scale_map, parameter):\n            if k == h.shape[1]:\n                h[:, : h.shape[1] // 2] = h[:, : h.shape[1] // 2] * scale[0]\n                hsp = Fourier_filter(hsp, threshold=1, scale=scale[1])\n        return h, hsp\n\n    @staticmethod\n    def from_closure(closure, transformer_options):\n        scale_dict = {}\n        for var_name, var in zip(closure.__code__.co_freevars, closure.__closure__):\n            if var_name == \"scale_dict\":\n                scale_dict = copy.deepcopy(var.cell_contents)\n                break\n        return FreeU(list(scale_dict.keys())), torch.Tensor(list(scale_dict.values()))\n\n    def gen_cache_key(self):\n        return [self.__class__.__name__, self.scale_map]\n\n\nclass FreeU_V2(torch.nn.Module):\n    def __init__(self, scale_map):\n        super().__init__()\n        self.scale_map = scale_map\n\n    def forward(self, h, hsp, parameter, transformer_options):\n        for k, scale in zip(self.scale_map, parameter):\n            if k == h.shape[1]:\n                hidden_mean = h.mean(1).unsqueeze(1)\n                B = hidden_mean.shape[0]\n                hidden_max, _ = torch.max(hidden_mean.view(B, -1), dim=-1, keepdim=True)\n                hidden_min, _ = torch.min(hidden_mean.view(B, -1), dim=-1, keepdim=True)\n                hidden_mean = (hidden_mean - hidden_min.unsqueeze(2).unsqueeze(3)) / (\n                    hidden_max - hidden_min\n                ).unsqueeze(2).unsqueeze(3)\n\n                h[:, : h.shape[1] // 2] = h[:, : h.shape[1] // 2] * (\n                    (scale[0] - 1) * hidden_mean + 1\n                )\n\n                hsp = Fourier_filter(hsp, threshold=1, scale=scale[1])\n\n        return h, hsp\n\n    @staticmethod\n    def from_closure(closure, transformer_options):\n        scale_dict = {}\n        for var_name, var in zip(closure.__code__.co_freevars, closure.__closure__):\n            if var_name == \"scale_dict\":\n                scale_dict = copy.deepcopy(var.cell_contents)\n                break\n        return FreeU_V2(list(scale_dict.keys())), torch.Tensor(\n            list(scale_dict.values())\n        )\n\n    def gen_cache_key(self):\n        return [self.__class__.__name__, self.scale_map]\n"
  },
  {
    "path": "module/comfy_trace/nodes_model_downscale.py",
    "content": "import comfy.utils\nimport torch\n\n\nclass PatchModelAddDownscale_input_block_patch(torch.nn.Module):\n    def __init__(\n        self,\n        block_number,\n        downscale_method,\n        downscale_factor,\n        sigma,\n        sigma_start,\n        sigma_end,\n    ):\n        super().__init__()\n        self.block_number = block_number\n        self.downscale_method = downscale_method\n        self.downscale_factor = downscale_factor\n        self.sigma = sigma\n        self.sigma_start = sigma_start\n        self.sigma_end = sigma_end\n\n    def forward(self, h, parameter, transformer_options):\n        if transformer_options[\"block\"][1] == self.block_number:\n            if self.sigma <= self.sigma_start and self.sigma >= self.sigma_end:\n                h = comfy.utils.common_upscale(\n                    h,\n                    round(int(h.shape[-1]) * (1.0 / self.downscale_factor)),\n                    round(int(h.shape[-2]) * (1.0 / self.downscale_factor)),\n                    self.downscale_method,\n                    \"disabled\",\n                )\n        return h\n\n    @staticmethod\n    def from_closure(closure, transformer_options):\n        parameter_dict = {}\n        for var_name, var in zip(closure.__code__.co_freevars, closure.__closure__):\n            parameter_dict[var_name] = var.cell_contents\n\n        sigma = transformer_options[\"sigmas\"][0].item()\n        return (\n            PatchModelAddDownscale_input_block_patch(\n                parameter_dict[\"block_number\"],\n                parameter_dict[\"downscale_method\"],\n                parameter_dict[\"downscale_factor\"],\n                sigma,\n                parameter_dict[\"sigma_start\"],\n                parameter_dict[\"sigma_end\"],\n            ),\n            (),\n        )\n\n    def gen_cache_key(self):\n        flag = 0\n        if self.sigma <= self.sigma_start and self.sigma >= self.sigma_end:\n            flag = 1\n        return [\n            self.__class__.__name__,\n            flag,\n            self.block_number,\n            self.downscale_method,\n            self.downscale_factor,\n        ]\n\n\nclass PatchModelAddDownscale_output_block_patch(torch.nn.Module):\n    def __init__(self, upscale_method):\n        super().__init__()\n        self.upscale_method = upscale_method\n\n    def forward(self, h, hsp, parameter, transformer_options):\n        if h.shape[2] != hsp.shape[2]:\n            h = comfy.utils.common_upscale(\n                h,\n                int(hsp.shape[-1]),\n                int(hsp.shape[-2]),\n                self.upscale_method,\n                \"disabled\",\n            )\n        return h, hsp\n\n    @staticmethod\n    def from_closure(closure, transformer_options):\n        parameter_dict = {}\n        for var_name, var in zip(closure.__code__.co_freevars, closure.__closure__):\n            parameter_dict[var_name] = var.cell_contents\n        return (\n            PatchModelAddDownscale_output_block_patch(parameter_dict[\"upscale_method\"]),\n            (),\n        )\n\n    def gen_cache_key(self):\n        return [self.__class__.__name__, self.upscale_method]\n"
  },
  {
    "path": "module/comfy_trace/openaimodel.py",
    "content": "import copy\n\nimport torch as th\nimport torch.nn as nn\nfrom comfy.ldm.modules.diffusionmodules.openaimodel import (\n    UNetModel,\n    apply_control,\n    forward_timestep_embed,\n)\nfrom comfy.ldm.modules.diffusionmodules.util import timestep_embedding\n\norigin_forward_timestep_embed = forward_timestep_embed\n\n\nclass ForwardTimestepEmbedModule(th.nn.Module):\n    def __init__(self, ts, transformer_options={}, num_video_frames=None):\n        super().__init__()\n        self.module = ts\n        self.transformer_options = transformer_options\n        self.num_video_frames = num_video_frames\n\n    def forward(\n        self,\n        x,\n        emb,\n        context=None,\n        output_shape_tensor=None,\n        time_context=None,\n        image_only_indicator=None,\n    ):\n        return origin_forward_timestep_embed(\n            self.module,\n            x,\n            emb,\n            context=context,\n            transformer_options=self.transformer_options,\n            output_shape=output_shape_tensor\n            if output_shape_tensor is None\n            else output_shape_tensor.shape,\n            time_context=time_context,\n            num_video_frames=self.num_video_frames,\n            image_only_indicator=image_only_indicator,\n        )\n\n\nclass PatchUNetModel(UNetModel):\n    @staticmethod\n    def cast_from(other):\n        tcls = UNetModel\n        if isinstance(other, tcls):\n            other.__class__ = PatchUNetModel\n            other.patch_init()\n            return other\n        raise ValueError(f\"instance must be {tcls.__qualname__}\")\n\n    def cast_to_base_model(self):\n        self.patch_deinit()\n        self.__class__ = UNetModel\n        return self\n\n    def patch_init(self):\n        self.input_block_patch = nn.ModuleList(\n            [nn.ModuleList() for _ in self.input_blocks]\n        )\n        self.input_block_patch_after_skip = nn.ModuleList(\n            [nn.ModuleList() for _ in self.input_blocks]\n        )\n        self.output_block_patch = nn.ModuleList(\n            [nn.ModuleList() for _ in self.output_blocks]\n        )\n\n    def patch_deinit(self):\n        del self.input_block_patch\n        del self.input_block_patch_after_skip\n        del self.output_block_patch\n\n    def set_patch_module(self, patch_module):\n        if \"input_block_patch\" in patch_module:\n            self.input_block_patch = nn.ModuleList(\n                [\n                    nn.ModuleList(copy.deepcopy(patch_module[\"input_block_patch\"]))\n                    for _ in self.input_blocks\n                ]\n            )\n        if \"input_block_patch_after_skip\" in patch_module:\n            self.input_block_patch_after_skip = nn.ModuleList(\n                [\n                    nn.ModuleList(\n                        copy.deepcopy(patch_module[\"input_block_patch_after_skip\"])\n                    )\n                    for _ in self.input_blocks\n                ]\n            )\n        if \"output_block_patch\" in patch_module:\n            self.output_block_patch = nn.ModuleList(\n                [\n                    nn.ModuleList(copy.deepcopy(patch_module[\"output_block_patch\"]))\n                    for _ in self.output_blocks\n                ]\n            )\n\n    def forward(\n        self,\n        x,\n        timesteps=None,\n        context=None,\n        y=None,\n        control=None,\n        transformer_options={},\n        **kwargs,\n    ):\n        \"\"\"\n        Apply the model to an input 
batch.\n        :param x: an [N x C x ...] Tensor of inputs.\n        :param timesteps: a 1-D batch of timesteps.\n        :param context: conditioning plugged in via crossattn\n        :param y: an [N] Tensor of labels, if class-conditional.\n        :return: an [N x C x ...] Tensor of outputs.\n        \"\"\"\n        transformer_options[\"original_shape\"] = list(x.shape)\n        transformer_options[\"current_index\"] = 0\n        transformer_patches = transformer_options.get(\"patches\", {})\n\n        num_video_frames = kwargs.get(\"num_video_frames\", self.default_num_video_frames)\n        image_only_indicator = kwargs.get(\"image_only_indicator\", None)\n        time_context = kwargs.get(\"time_context\", None)\n\n        assert (y is not None) == (\n            self.num_classes is not None\n        ), \"must specify y if and only if the model is class-conditional\"\n        hs = []\n        t_emb = timestep_embedding(\n            timesteps, self.model_channels, repeat_only=False\n        ).to(self.dtype)\n        emb = self.time_embed(t_emb)\n\n        if self.num_classes is not None:\n            assert y.shape[0] == x.shape[0]\n            emb = emb + self.label_emb(y)\n\n        h = x.type(self.dtype)\n        for id, module in enumerate(self.input_blocks):\n            transformer_options[\"block\"] = (\"input\", id)\n            h = forward_timestep_embed(\n                module,\n                h,\n                emb,\n                context,\n                transformer_options,\n                time_context=time_context,\n                num_video_frames=num_video_frames,\n                image_only_indicator=image_only_indicator,\n            )\n            h = apply_control(h, control, \"input\")\n\n            for patch_id, input_block_patch_module in enumerate(\n                self.input_block_patch[id]\n            ):\n                h = input_block_patch_module(\n                    h,\n                    transformer_patches.get(\"input_block_patch\")[patch_id],\n                    transformer_options,\n                )\n\n            hs.append(h)\n\n            for patch_id, input_block_patch_after_skip_module in enumerate(\n                self.input_block_patch_after_skip[id]\n            ):\n                h = input_block_patch_after_skip_module(\n                    h,\n                    transformer_patches.get(\"input_block_patch_after_skip\")[patch_id],\n                    transformer_options,\n                )\n\n        transformer_options[\"block\"] = (\"middle\", 0)\n        h = forward_timestep_embed(\n            self.middle_block,\n            h,\n            emb,\n            context,\n            transformer_options,\n            time_context=time_context,\n            num_video_frames=num_video_frames,\n            image_only_indicator=image_only_indicator,\n        )\n        h = apply_control(h, control, \"middle\")\n\n        for id, module in enumerate(self.output_blocks):\n            transformer_options[\"block\"] = (\"output\", id)\n            hsp = hs.pop()\n            hsp = apply_control(hsp, control, \"output\")\n\n            for patch_id, output_block_patch_module in enumerate(\n                self.output_block_patch[id]\n            ):\n                h, hsp = output_block_patch_module(\n                    h,\n                    hsp,\n                    transformer_patches.get(\"output_block_patch\")[patch_id],\n                    transformer_options,\n                )\n\n            h = th.cat([h, hsp], dim=1)\n 
           del hsp\n            if len(hs) > 0:\n                output_shape = hs[-1].shape\n            else:\n                output_shape = None\n            h = forward_timestep_embed(\n                module,\n                h,\n                emb,\n                context,\n                transformer_options,\n                output_shape,\n                time_context=time_context,\n                num_video_frames=num_video_frames,\n                image_only_indicator=image_only_indicator,\n            )\n        h = h.type(x.dtype)\n        if self.predict_codebook_ids:\n            return self.id_predictor(h)\n        else:\n            return self.out(h)\n"
  },
  {
    "path": "module/comfy_trace/sd.py",
    "content": "import torch\n\n\nclass VAEDecodeModule(torch.nn.Module):\n    def __init__(self, module, decode):\n        super().__init__()\n        self.module = module\n        self.decode = decode\n\n    def forward(self, samples):\n        return self.decode(samples)\n"
  },
  {
    "path": "module/comfy_trace_utilities.py",
    "content": "import contextlib\nimport copy\n\nimport torch\n\n\ndef hash_arg(arg):\n    # micro optimization: bool obj is an instance of int\n    if isinstance(arg, (str, int, float, bytes)):\n        return arg\n    if isinstance(arg, (tuple, list)):\n        return tuple(map(hash_arg, arg))\n    if isinstance(arg, dict):\n        return tuple(\n            sorted(\n                ((hash_arg(k), hash_arg(v)) for k, v in arg.items()), key=lambda x: x[0]\n            )\n        )\n    if isinstance(arg, torch.dtype):\n        return str(arg)\n\n    return type(arg)\n\n\nclass ModuleWrapper(torch.nn.Module):\n    def __init__(self, module):\n        super().__init__()\n        self.module = module\n\n    def forward(self, *args, **kwargs):\n        return self.module(*args, **kwargs)\n\n\nclass ModuleFactory:\n    def __init__(self, callable, kwargs) -> None:\n        self.callable = callable\n        self.kwargs = kwargs\n        self.converted_kwargs = self.gen_converted_kwargs()\n\n    def gen_converted_kwargs(self):\n        return self.kwargs\n\n    def get_converted_kwargs(self):\n        return self.converted_kwargs\n\n    def gen_cache_key(self):\n        return (\n            self.callable.__class__.__qualname__,\n            hash_arg(self.kwargs),\n        )\n\n    @contextlib.contextmanager\n    def converted_module_context(self):\n        yield (self.callable, self.converted_kwargs)\n\n    def load_state_dict_to_module(self, script_module):\n        with self.converted_module_context() as (m_model, m_kwargs):\n            script_module.load_state_dict(\n                m_model.state_dict(), strict=False, assign=True\n            )\n        return script_module\n\n\nclass TracerWithCache:\n    cache_map = {}\n\n    @staticmethod\n    def get_traced_module(module_factory: ModuleFactory, device=None):\n        cache_key = module_factory.gen_cache_key()\n\n        if not cache_key in TracerWithCache.cache_map:\n            with module_factory.converted_module_context() as (m_model, m_kwargs):\n                if device != None:\n                    m_model.to(device=device)\n                script_module = torch.jit.trace(\n                    m_model,\n                    example_kwarg_inputs=m_kwargs,\n                    strict=True,\n                    check_trace=True,\n                )\n\n            meta_script_module = script_module.to_empty(device=\"meta\")\n            TracerWithCache.cache_map[cache_key] = meta_script_module\n\n        meta_script_module = copy.deepcopy(TracerWithCache.cache_map[cache_key])\n\n        script_module = module_factory.load_state_dict_to_module(meta_script_module)\n        return script_module\n"
  },
  {
    "path": "module/controlnet_tensorrt.py",
    "content": "from .tensorrt_wrapper import CallableTensorRTEngineWrapper\n\n\nclass CallableTensorRTEngineWrapperDynamicShapeControlNet(\n    CallableTensorRTEngineWrapper\n):\n    args_name = [\"x\", \"hint\", \"timesteps\", \"context\", \"y\"]\n\n    def gen_onnx_args(self, kwargs, module=None):\n        args_name = []\n        args = []\n        for arg_name in self.args_name:\n            args.append(kwargs.get(arg_name, None))\n            if args[-1] != None:\n                args_name.append(arg_name)\n        dynamic_axes = {\n            \"x\": {0: \"B\", 2: \"H\", 3: \"W\"},\n            \"hint\": {0: \"HB\", 2: \"8H\", 3: \"8W\"},\n            \"timesteps\": {0: \"B\"},\n            \"context\": {0: \"B\", 1: \"77E\"},\n        }\n        for k in list(dynamic_axes.keys()):\n            if not k in args_name:\n                dynamic_axes.pop(k)\n        return args, args_name, dynamic_axes\n\n    def gen_tensorrt_args(self, kwargs):\n        input_shape_info = {}\n        feed_dict = {}\n        for arg_name in self.args_name:\n            arg = kwargs.get(arg_name, None)\n            if arg != None:\n                feed_dict[arg_name] = arg\n                input_shape_info[arg_name] = tuple(arg.shape)\n\n        return feed_dict, input_shape_info\n\n    def gen_tensorrt_args_profile(self, input_shape_info):\n        min_input_profile_info = {\n            \"x\": {0: 1, 2: 8, 3: 8},\n            \"hint\": {0: 1, 2: 64, 3: 64},\n            \"timesteps\": {0: 1},\n            \"context\": {0: 1, 1: 77},\n        }\n        input_profile_info = {}\n        for arg_name, shape_info in input_shape_info.items():\n            min_shape_config = min_input_profile_info.get(arg_name, None)\n            min_shape_info = list(shape_info)\n            if min_shape_config != None:\n                for k, v in min_shape_config.items():\n                    min_shape_info[k] = v\n            input_profile_info[arg_name] = [\n                tuple(min_shape_info),\n                shape_info,\n                shape_info,\n            ]\n\n        return input_profile_info\n\n    def gen_onnx_outputs(self, module):\n        outputs_name = []\n        for i in range(len(module.input_blocks) + 1):\n            outputs_name.append(f\"output_{i}\")\n        self.outputs_name = outputs_name\n        return outputs_name\n\n    def gen_tensorrt_outputs(self, output_map):\n        output = []\n        for output_name in self.outputs_name:\n            output.append(output_map[output_name])\n        return output\n"
  },
  {
    "path": "module/model_base_tensorrt.py",
    "content": "import torch\n\nfrom .tensorrt_wrapper import CallableTensorRTEngineWrapper\n\n\nclass CallableTensorRTEngineWrapperDynamicShapeBaseModelApplyModel(\n    CallableTensorRTEngineWrapper\n):\n    args_name = [\n        \"input_x\",\n        \"timestep\",\n        \"c_concat\",\n        \"c_crossattn\",\n        \"y\",\n        \"control\",\n    ]\n\n    def gen_onnx_args(self, kwargs, module=None):\n        dynamic_axes = {\n            \"input_x\": {0: \"B\", 2: \"H\", 3: \"W\"},\n            \"timestep\": {0: \"B\"},\n            \"c_crossattn\": {0: \"B\", 1: \"E\"},\n            \"y\": {0: \"B\"},\n        }\n        args_name = []\n        args = []\n        for arg_name in self.args_name:\n            arg = kwargs.get(arg_name, None)\n            if arg is not None or not isinstance(\n                module, (torch.jit.ScriptFunction, torch.jit.ScriptModule)\n            ):\n                args.append(arg)\n                if arg is not None:\n                    if arg_name == \"control\":\n                        control_params = arg\n                        for key in control_params:\n                            for i, v in enumerate(control_params[key]):\n                                control_params_name = f\"{arg_name}_{key}_{i}\"\n                                args_name.append(control_params_name)\n                                dynamic_axes[control_params_name] = {\n                                    0: \"B\",\n                                    2: f\"{control_params_name}_H\",\n                                    3: f\"{control_params_name}_W\",\n                                }\n                    else:\n                        args_name.append(arg_name)\n        if not isinstance(module, (torch.jit.ScriptFunction, torch.jit.ScriptModule)):\n            args.append({})\n        for k in list(dynamic_axes.keys()):\n            if not k in args_name:\n                dynamic_axes.pop(k)\n        return args, args_name, dynamic_axes\n\n    def gen_tensorrt_args(self, kwargs):\n        input_shape_info = {}\n        feed_dict = {}\n        for arg_name in self.args_name:\n            arg = kwargs.get(arg_name, None)\n            if arg != None:\n                if arg_name == \"control\":\n                    control_params = arg\n                    for key in control_params:\n                        for i, v in enumerate(control_params[key]):\n                            control_params_name = f\"{arg_name}_{key}_{i}\"\n                            feed_dict[control_params_name] = v\n                            input_shape_info[control_params_name] = tuple(v.shape)\n                else:\n                    feed_dict[arg_name] = arg\n                    input_shape_info[arg_name] = tuple(arg.shape)\n\n        return feed_dict, input_shape_info\n\n    def gen_tensorrt_args_profile(self, input_shape_info):\n        min_input_profile_info = {\n            \"input_x\": {0: 1, 2: 2, 3: 2},\n            \"timestep\": {0: 1},\n            \"c_crossattn\": {0: 1, 1: 77},\n            \"y\": {0: 1},\n        }\n        input_profile_info = {}\n        for arg_name, shape_info in input_shape_info.items():\n            if arg_name.startswith(\"control\"):\n                min_shape_config = {0: 1, 2: 1, 3: 1}\n            else:\n                min_shape_config = min_input_profile_info.get(arg_name, None)\n            min_shape_info = list(shape_info)\n            if min_shape_config != None:\n                for k, v in min_shape_config.items():\n                   
input_profile_info[arg_name] = [\n                tuple(min_shape_info),\n                shape_info,\n                shape_info,\n            ]\n\n        return input_profile_info\n"
  },
  {
    "path": "module/onnx_module_refit.py",
    "content": "import logging\nfrom collections import OrderedDict\nfrom dataclasses import asdict, dataclass\n\nimport onnx\nimport torch\nfrom onnx import helper, numpy_helper\n\n_logger = logging.getLogger(__name__)\n\n\n@dataclass\nclass ParamsDictGenMapValue:\n    op: str\n    args: list\n\n\ndef make_module_onnx_tensor_gen_map_by_params_dict(\n    module: torch.nn.Module, params_dict: dict[str, torch.Tensor]\n):\n    params_dict_gen_map = {}\n\n    params_dict_dataptr_map = {v.data_ptr(): k for k, v in params_dict.items()}\n\n    not_found_state_dict_list = []\n    for k, v in module.state_dict().items():\n        if v.data_ptr() in params_dict_dataptr_map:\n            params_dict_key = params_dict_dataptr_map[v.data_ptr()]\n            assert params_dict_key not in params_dict_gen_map\n            if params_dict[params_dict_key].shape == v.shape:\n                params_dict_gen_map[params_dict_key] = asdict(\n                    ParamsDictGenMapValue(\"rename\", [k])\n                )\n                # torch.testing.assert_close()\n            elif params_dict[params_dict_key].squeeze().shape == v.shape:\n                params_dict_gen_map[params_dict_key] = asdict(\n                    ParamsDictGenMapValue(\n                        \"reshape\", [k, list(params_dict[params_dict_key].shape)]\n                    )\n                )\n                # torch.testing.assert_close()\n            elif params_dict[params_dict_key].transpose(0, 1).shape == v.shape:\n                params_dict_gen_map[params_dict_key] = asdict(\n                    ParamsDictGenMapValue(\"transpose\", [k, [0, 1]])\n                )\n                # torch.testing.assert_close()\n            else:\n                assert False, (\n                    k,\n                    v.shape,\n                    params_dict_key,\n                    params_dict[params_dict_key].shape,\n                )\n        else:\n            not_found_state_dict_list.append(k)\n\n    not_found_key_set = set(params_dict.keys()) - set(params_dict_gen_map.keys())\n    for not_found_key in not_found_key_set:\n        _logger.warning(not_found_key)\n    assert len(not_found_key_set) == 0\n    return params_dict_gen_map\n\n\ndef make_module_onnx_tensor_gen_map_by_onnx_model(\n    module: torch.nn.Module,\n    onnx_model: str,\n) -> dict:\n    # TODO\n\n    return params_dict_gen_map\n\n\ndef make_params_dict_by_module(\n    module: torch.nn.Module, params_dict_gen_map: dict[str, dict]\n):\n    params_dict = {}\n\n    module_state_dict: dict[str, torch.Tensor] = module.state_dict()\n\n    op_map = {\n        \"rename\": lambda name: module_state_dict[name],\n        \"reshape\": lambda name, shape: module_state_dict[name].reshape(tuple(shape)),\n        \"transpose\": lambda name, dims: module_state_dict[name].transpose(*dims),\n    }\n\n    for k, v in params_dict_gen_map.items():\n        op = v[\"op\"]\n        args = v[\"args\"]\n\n        params_dict[k] = op_map[op](*args)\n\n    return params_dict\n\n\ndef make_constant_params_dict_by_onnx_model(\n    onnx_model_path,\n):\n    constant_params_dict = {}\n\n    onnx_model = onnx.load(onnx_model_path)\n    for node in onnx_model.graph.node:\n        if node.op_type == \"Constant\":\n            for output in node.output:\n                if \"Constant\" in output:\n                    attrs = OrderedDict(\n                        (a.name, helper.get_attribute_value(a)) for a in node.attribute\n                    )\n                    ndarry = 
\n    onnx_model = onnx.load(onnx_model_path)\n    for node in onnx_model.graph.node:\n        if node.op_type == \"Constant\":\n            for output in node.output:\n                if \"Constant\" in output:\n                    attrs = OrderedDict(\n                        (a.name, helper.get_attribute_value(a)) for a in node.attribute\n                    )\n                    ndarray = numpy_helper.to_array(attrs[\"value\"])\n                    try:\n                        constant_params_dict[output] = torch.Tensor(ndarray.copy())\n                    except Exception:\n                        print(output, ndarray)\n                        continue\n\n    return constant_params_dict\n"
  },
  {
    "path": "module/openaimodel_tensorrt.py",
    "content": "from dataclasses import dataclass, field\nfrom typing import Dict\n\nimport comfy.ldm.modules.diffusionmodules.openaimodel\nimport comfy.model_management\nimport comfy.model_patcher\nimport torch\nimport torch as th\nimport yaml\n\nfrom .comfy_trace.openaimodel import (\n    ForwardTimestepEmbedModule,\n    origin_forward_timestep_embed,\n)\nfrom .tensorrt_wrapper import CallableTensorRTEngineWrapper, TensorRTEngineContext\n\nTENSORRT_CONTEXT_KEY = \"tensorrt_context\"\n\n\n@dataclass\nclass TensorRTEngineBlockContext:\n    block_cache: Dict[str, CallableTensorRTEngineWrapper] = field(\n        default_factory=lambda: {}\n    )\n    tensorrt_context: TensorRTEngineContext = field(\n        default_factory=lambda: TensorRTEngineContext()\n    )\n\n    def dump_input_profile_info(self):\n        input_shape_info_map = {}\n        for key in sorted(self.block_cache):\n            input_shape_info_map[key] = self.block_cache[key].input_shape_info\n        print(yaml.safe_dump(input_shape_info_map))\n\n\nclass CallableTensorRTEngineWrapperDynamicShapeForwardTimestep(\n    CallableTensorRTEngineWrapper\n):\n    args_name = [\n        \"x\",\n        \"emb\",\n        \"context\",\n        \"output_shape_tensor\",\n        \"time_context\",\n        \"image_only_indicator\",\n    ]\n\n    def gen_onnx_args(self, kwargs, module=None):\n        args_name = []\n        args = []\n        for arg_name in self.args_name:\n            args.append(kwargs.get(arg_name, None))\n            if args[-1] is not None:\n                args_name.append(arg_name)\n        dynamic_axes = {\n            \"x\": {0: \"B\", 2: \"H\", 3: \"W\"},\n            \"emb\": {0: \"B\"},\n            \"context\": {0: \"B\", 1: \"E\"},\n            \"output_shape_tensor\": {0: \"B\", 2: \"OH\", 3: \"OW\"},\n        }\n        for k in list(dynamic_axes.keys()):\n            if k not in args_name:\n                dynamic_axes.pop(k)\n        return args, args_name, dynamic_axes\n\n    def gen_tensorrt_args(self, kwargs):\n        input_shape_info = {}\n        feed_dict = {}\n        for arg_name in self.args_name:\n            arg = kwargs.get(arg_name, None)\n            if arg is not None:\n                feed_dict[arg_name] = arg\n                input_shape_info[arg_name] = tuple(arg.shape)\n\n        return feed_dict, input_shape_info\n\n    def gen_tensorrt_args_profile(self, input_shape_info):\n        min_input_profile_info = {\n            \"x\": {0: 1, 2: 1, 3: 1},\n            \"emb\": {0: 1},\n            \"context\": {0: 1, 1: 77},\n            \"output_shape_tensor\": {0: 1, 2: 1, 3: 1},\n        }\n        input_profile_info = {}\n        for arg_name, shape_info in input_shape_info.items():\n            min_shape_config = min_input_profile_info.get(arg_name, None)\n            min_shape_info = list(shape_info)\n            if min_shape_config is not None:\n                for k, v in min_shape_config.items():\n                    min_shape_info[k] = v\n            input_profile_info[arg_name] = [\n                tuple(min_shape_info),\n                shape_info,\n                shape_info,\n            ]\n\n        return input_profile_info\n\n\ndef hook_forward_timestep_embed(\n    ts,\n    x,\n    emb,\n    context=None,\n    transformer_options={},\n    output_shape=None,\n    time_context=None,\n    num_video_frames=None,\n    image_only_indicator=None,\n):\n    module = ForwardTimestepEmbedModule(ts, transformer_options, num_video_frames)\n    tensorrt_block_context: 
TensorRTEngineBlockContext = transformer_options.get(\n        TENSORRT_CONTEXT_KEY, None\n    )\n    if tensorrt_block_context is not None:\n        block_key = str(transformer_options[\"block\"])\n        block = tensorrt_block_context.block_cache.get(block_key, None)\n        if block is None:\n            tensorrt_block_context.block_cache[block_key] = (\n                CallableTensorRTEngineWrapperDynamicShapeForwardTimestep(\n                    tensorrt_block_context.tensorrt_context, block_key\n                )\n            )\n        return tensorrt_block_context.block_cache[block_key](\n            module,\n            x=x,\n            emb=emb,\n            context=context,\n            output_shape_tensor=output_shape\n            if output_shape is None\n            else th.empty(output_shape, device=x.device, dtype=x.dtype),\n            time_context=time_context,\n            image_only_indicator=image_only_indicator,\n        )\n    # Pass keyword arguments so they line up with\n    # ForwardTimestepEmbedModule.forward's signature.\n    return module(\n        x,\n        emb,\n        context=context,\n        output_shape_tensor=output_shape\n        if output_shape is None\n        else th.empty(output_shape, device=x.device, dtype=x.dtype),\n        time_context=time_context,\n        image_only_indicator=image_only_indicator,\n    )\n\n\ndef do_hook_forward_timestep_embed():\n    comfy.ldm.modules.diffusionmodules.openaimodel.forward_timestep_embed = (\n        hook_forward_timestep_embed\n    )\n\n\ndef undo_hook_forward_timestep_embed():\n    comfy.ldm.modules.diffusionmodules.openaimodel.forward_timestep_embed = (\n        origin_forward_timestep_embed\n    )\n\n\nclass CallableTensorRTEngineWrapperDynamicShapeUNetModelForward(\n    CallableTensorRTEngineWrapper\n):\n    args_name = [\n        \"x\",\n        \"timesteps\",\n        \"context\",\n        \"y\",\n        \"control\",\n    ]\n\n    def gen_onnx_args(self, kwargs, module=None):\n        dynamic_axes = {\n            \"x\": {0: \"B\", 2: \"H\", 3: \"W\"},\n            \"timesteps\": {0: \"B\"},\n            \"context\": {0: \"B\", 1: \"E\"},\n            \"y\": {0: \"B\"},\n        }\n        args_name = []\n        args = []\n        for arg_name in self.args_name:\n            arg = kwargs.get(arg_name, None)\n            if arg is not None or not isinstance(\n                module, (torch.jit.ScriptFunction, torch.jit.ScriptModule)\n            ):\n                args.append(arg)\n                if arg is not None:\n                    if arg_name == \"control\":\n                        control_params = arg\n                        for key in control_params:\n                            for i, v in enumerate(control_params[key]):\n                                control_params_name = f\"{arg_name}_{key}_{i}\"\n                                args_name.append(control_params_name)\n                                dynamic_axes[control_params_name] = {\n                                    0: \"B\",\n                                    2: f\"{control_params_name}_H\",\n                                    3: f\"{control_params_name}_W\",\n                                }\n                    else:\n                        args_name.append(arg_name)\n        if not isinstance(module, (torch.jit.ScriptFunction, torch.jit.ScriptModule)):\n            args.append({})\n        for k in list(dynamic_axes.keys()):\n            if k not in args_name:\n                dynamic_axes.pop(k)\n        return args, args_name, dynamic_axes\n\n    def gen_tensorrt_args(self, kwargs):\n        input_shape_info = {}\n        feed_dict = {}\n        for arg_name in self.args_name:\n            arg = kwargs.get(arg_name, None)\n            if arg is not None:\n                if arg_name == \"control\":\n                    control_params = arg\n
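                    # ControlNet residuals arrive as a dict of tensor lists; flatten\n                    # them into individually named engine inputs (control_<key>_<i>)\n                    # to match the input names generated during ONNX export above.\n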
    for key in control_params:\n                        for i, v in enumerate(control_params[key]):\n                            control_params_name = f\"{arg_name}_{key}_{i}\"\n                            feed_dict[control_params_name] = v\n                            input_shape_info[control_params_name] = tuple(v.shape)\n                else:\n                    feed_dict[arg_name] = arg\n                    input_shape_info[arg_name] = tuple(arg.shape)\n\n        return feed_dict, input_shape_info\n\n    def gen_tensorrt_args_profile(self, input_shape_info):\n        # Build a (min, opt, max) TensorRT shape profile for every input:\n        # opt and max use the observed shape, while min relaxes the batch,\n        # spatial, and context dims so one engine covers a range of shapes.\n        min_input_profile_info = {\n            \"x\": {0: 1, 2: 2, 3: 2},\n            \"timesteps\": {0: 1},\n            \"context\": {0: 1, 1: 77},\n            \"y\": {0: 1},\n        }\n        input_profile_info = {}\n        for arg_name, shape_info in input_shape_info.items():\n            if arg_name.startswith(\"control\"):\n                min_shape_config = {0: 1, 2: 1, 3: 1}\n            else:\n                min_shape_config = min_input_profile_info.get(arg_name, None)\n            min_shape_info = list(shape_info)\n            if min_shape_config is not None:\n                for k, v in min_shape_config.items():\n                    min_shape_info[k] = v\n            input_profile_info[arg_name] = [\n                tuple(min_shape_info),\n                shape_info,\n                shape_info,\n            ]\n\n        return input_profile_info\n"
  },
  {
    "path": "module/patched_onnx_export/utils_2_4_0.py",
    "content": "# mypy: allow-untyped-defs\n\"\"\"Functions to export models into the ONNX IR format.\n\nThese models can be loaded with the ONNX library and then\nconverted to models which run on other deep learning frameworks.\n\"\"\"\n\nfrom __future__ import annotations\n\nimport contextlib\nimport copy\nimport inspect\nimport io\nimport re\nimport textwrap\nimport typing\nimport warnings\nfrom typing import (\n    Any,\n    Callable,\n    Collection,\n    Dict,\n    List,\n    Mapping,\n    Optional,\n    Sequence,\n    Set,\n    Tuple,\n    Type,\n    Union,\n    cast,\n)\n\nimport torch\nimport torch._C._onnx as _C_onnx\nimport torch.jit._trace\nimport torch.serialization\nfrom torch import _C\nfrom torch.onnx import (  # noqa: F401\n    _constants,\n    _exporter_states,\n    errors,\n    symbolic_caffe2,\n    symbolic_helper,\n)\nfrom torch.onnx._globals import GLOBALS\nfrom torch.onnx._internal import (\n    _beartype,\n    diagnostics,\n    jit_utils,\n    onnx_proto_utils,\n    registration,\n)\n\n__all__ = [\n    \"is_in_onnx_export\",\n    \"select_model_mode_for_export\",\n    \"disable_apex_o2_state_dict_hook\",\n    \"setup_onnx_logging\",\n    \"exporter_context\",\n    \"export\",\n    \"model_signature\",\n    \"warn_on_static_input_change\",\n    \"unpack_quantized_tensor\",\n    \"export_to_pretty_string\",\n    \"unconvertible_ops\",\n    \"register_custom_op_symbolic\",\n    \"unregister_custom_op_symbolic\",\n]\n\n\ndef is_in_onnx_export() -> bool:\n    \"\"\"Returns whether it is in the middle of ONNX export.\"\"\"\n    return GLOBALS.in_onnx_export\n\n\n# TODO(justinchuby): Remove dependency to this global variable from constant_fold.cpp\n# Skip check due to cannot import IValue from torch._C\n_params_dict = {}  # type: ignore[var-annotated]\n\n\n@contextlib.contextmanager\n@_beartype.beartype\ndef select_model_mode_for_export(model, mode: _C_onnx.TrainingMode):\n    r\"\"\"A context manager to temporarily set the training mode of ``model``\n    to ``mode``, resetting it when we exit the with-block.\n\n    Args:\n        model: Same type and meaning as ``model`` arg to :func:`export`.\n        mode: Same type and meaning as ``training`` arg to :func:`export`.\n    \"\"\"\n    if not isinstance(mode, _C_onnx.TrainingMode):\n        raise TypeError(\n            f\"'mode' should be a torch.onnx.TrainingMode enum, but got '{type(mode)}'.\"\n        )\n    originally_training: bool = False\n\n    if hasattr(model, \"training\"):\n        originally_training = model.training\n\n        # ONNX opset 12 has better support for training amenable models, with updated\n        # versions of the dropout and batch_norm operators\n        if mode == _C_onnx.TrainingMode.TRAINING or (\n            mode == _C_onnx.TrainingMode.PRESERVE and originally_training\n        ):\n            GLOBALS.export_training = True\n            if GLOBALS.export_onnx_opset_version < 12:\n                warnings.warn(\n                    \"You are exporting the model in training mode with onnx opset \"\n                    f\"version {GLOBALS.export_onnx_opset_version}. 
\"\n                    \"Opset versions lower than opset 12 will not be able to export \"\n                    \"nodes such as Dropout and BatchNorm correctly.\"\n                )\n        else:\n            GLOBALS.export_training = False\n\n        GLOBALS.training_mode = mode\n        if mode == _C_onnx.TrainingMode.TRAINING:\n            model.train(True)\n        elif mode == _C_onnx.TrainingMode.EVAL:\n            model.train(False)\n        # else mode == _C_onnx.TrainingMode.PRESERVE, do nothing\n\n    try:\n        yield\n    finally:\n        if hasattr(model, \"training\") and not mode == _C_onnx.TrainingMode.PRESERVE:\n            model.train(originally_training)\n\n\n@contextlib.contextmanager\n@_beartype.beartype\ndef disable_apex_o2_state_dict_hook(\n    model: Union[torch.nn.Module, torch.jit.ScriptFunction],\n):\n    # Apex O2 hook state_dict to return fp16 weights as fp32.\n    # Exporter cannot identify them as same tensors.\n    # Since this hook is only used by optimizer, it is safe to\n    # remove this hook while exporting.\n    if not isinstance(model, torch.jit.ScriptFunction):\n        model_hooks = {}  # type: ignore[var-annotated]\n        for module in model.modules():\n            for key, hook in module._state_dict_hooks.items():\n                if type(hook).__name__ == \"O2StateDictHook\":\n                    if module not in model_hooks:\n                        model_hooks[module] = {}\n                    model_hooks[module][key] = hook\n            if module in model_hooks:\n                for key in model_hooks[module]:\n                    module._state_dict_hooks.pop(key)\n        try:\n            yield\n        finally:\n            # Add the hooks back\n            for module, m_map in model_hooks.items():\n                for key, hook in m_map.items():\n                    module._state_dict_hooks[key] = hook\n    else:\n        try:\n            yield\n        finally:\n            pass\n\n\n@contextlib.contextmanager\n@_beartype.beartype\ndef setup_onnx_logging(verbose: bool):\n    is_originally_enabled = torch.onnx.is_onnx_log_enabled()\n    if is_originally_enabled or verbose:\n        torch.onnx.enable_log()\n    try:\n        yield\n    finally:\n        if not is_originally_enabled:\n            torch.onnx.disable_log()\n\n\n@contextlib.contextmanager\n@_beartype.beartype\ndef exporter_context(model, mode: _C_onnx.TrainingMode, verbose: bool):\n    with select_model_mode_for_export(\n        model, mode\n    ) as mode_ctx, disable_apex_o2_state_dict_hook(\n        model\n    ) as apex_ctx, setup_onnx_logging(\n        verbose\n    ) as log_ctx, diagnostics.create_export_diagnostic_context() as diagnostic_ctx:\n        yield (mode_ctx, apex_ctx, log_ctx, diagnostic_ctx)\n\n\ndef export(\n    model: Union[torch.nn.Module, torch.jit.ScriptModule, torch.jit.ScriptFunction],\n    args: Union[Tuple[Any, ...], torch.Tensor],\n    f: Optional[Union[str, io.BytesIO]] = None,\n    export_params: bool = True,\n    verbose: bool = False,\n    training: _C_onnx.TrainingMode = _C_onnx.TrainingMode.EVAL,\n    input_names: Optional[Sequence[str]] = None,\n    output_names: Optional[Sequence[str]] = None,\n    operator_export_type: _C_onnx.OperatorExportTypes = _C_onnx.OperatorExportTypes.ONNX,\n    opset_version: Optional[int] = None,\n    do_constant_folding: bool = True,\n    dynamic_axes: Optional[\n        Union[Mapping[str, Mapping[int, str]], Mapping[str, Sequence[int]]]\n    ] = None,\n    keep_initializers_as_inputs: Optional[bool] = None,\n 
   custom_opsets: Optional[Mapping[str, int]] = None,\n    export_modules_as_functions: Union[bool, Collection[Type[torch.nn.Module]]] = False,\n    autograd_inlining: Optional[bool] = True,\n    dynamo: bool = False,\n) -> Optional[torch.onnx.ONNXProgram]:\n    r\"\"\"Exports a model into ONNX format.\n\n    If ``model`` is not a :class:`torch.jit.ScriptModule` nor a\n    :class:`torch.jit.ScriptFunction`, this runs\n    ``model`` once in order to convert it to a TorchScript graph to be exported\n    (the equivalent of :func:`torch.jit.trace`). Thus this has the same limited support\n    for dynamic control flow as :func:`torch.jit.trace`.\n\n    Args:\n        model (:class:`torch.nn.Module`, :class:`torch.jit.ScriptModule` or :class:`torch.jit.ScriptFunction`):\n            the model to be exported.\n        args (tuple or torch.Tensor):\n\n            args can be structured either as:\n\n            1. ONLY A TUPLE OF ARGUMENTS::\n\n                args = (x, y, z)\n\n            The tuple should contain model inputs such that ``model(*args)`` is a valid\n            invocation of the model. Any non-Tensor arguments will be hard-coded into the\n            exported model; any Tensor arguments will become inputs of the exported model,\n            in the order they occur in the tuple.\n\n            2. A TENSOR::\n\n                args = torch.Tensor([1])\n\n            This is equivalent to a 1-ary tuple of that Tensor.\n\n            3. A TUPLE OF ARGUMENTS ENDING WITH A DICTIONARY OF NAMED ARGUMENTS::\n\n                args = (\n                    x,\n                    {\n                        \"y\": input_y,\n                        \"z\": input_z\n                    }\n                )\n\n            All but the last element of the tuple will be passed as non-keyword arguments,\n            and named arguments will be set from the last element. If a named argument is\n            not present in the dictionary, it is assigned the default value, or None if a\n            default value is not provided.\n\n            .. note::\n                If a dictionary is the last element of the args tuple, it will be\n                interpreted as containing named arguments. In order to pass a dict as the\n                last non-keyword arg, provide an empty dict as the last element of the args\n                tuple. For example, instead of::\n\n                    torch.onnx.export(\n                        model,\n                        (\n                            x,\n                            # WRONG: will be interpreted as named arguments\n                            {y: z}\n                        ),\n                        \"test.onnx.pb\"\n                    )\n\n                Write::\n\n                    torch.onnx.export(\n                        model,\n                        (\n                            x,\n                            {y: z},\n                            {}\n                        ),\n                        \"test.onnx.pb\"\n                    )\n\n        f: a file-like object (such that ``f.fileno()`` returns a file descriptor)\n            or a string containing a file name.  A binary protocol buffer will be written\n            to this file.\n        export_params (bool, default True): if True, all parameters will\n            be exported. 
Set this to False if you want to export an untrained model.\n            In this case, the exported model will first take all of its parameters\n            as arguments, with the ordering as specified by ``model.state_dict().values()``\n        verbose (bool, default False): if True, prints a description of the\n            model being exported to stdout. In addition, the final ONNX graph will include the\n            field ``doc_string``` from the exported model which mentions the source code locations\n            for ``model``. If True, ONNX exporter logging will be turned on.\n        training (enum, default TrainingMode.EVAL):\n            * ``TrainingMode.EVAL``: export the model in inference mode.\n            * ``TrainingMode.PRESERVE``: export the model in inference mode if model.training is\n                False and in training mode if model.training is True.\n            * ``TrainingMode.TRAINING``: export the model in training mode. Disables optimizations\n                which might interfere with training.\n        input_names (list of str, default empty list): names to assign to the\n            input nodes of the graph, in order.\n        output_names (list of str, default empty list): names to assign to the\n            output nodes of the graph, in order.\n        operator_export_type (enum, default OperatorExportTypes.ONNX):\n\n            * ``OperatorExportTypes.ONNX``: Export all ops as regular ONNX ops\n                (in the default opset domain).\n            * ``OperatorExportTypes.ONNX_FALLTHROUGH``: Try to convert all ops\n                to standard ONNX ops in the default opset domain. If unable to do so\n                (e.g. because support has not been added to convert a particular torch op to ONNX),\n                fall back to exporting the op into a custom opset domain without conversion. Applies\n                to `custom ops <https://pytorch.org/tutorials/advanced/torch_script_custom_ops.html>`_\n                as well as ATen ops. For the exported model to be usable, the runtime must support\n                these non-standard ops.\n            * ``OperatorExportTypes.ONNX_ATEN``: All ATen ops (in the TorchScript namespace \"aten\")\n                are exported as ATen ops (in opset domain \"org.pytorch.aten\").\n                `ATen <https://pytorch.org/cppdocs/#aten>`_ is PyTorch's built-in tensor library, so\n                this instructs the runtime to use PyTorch's implementation of these ops.\n\n                .. warning::\n\n                    Models exported this way are probably runnable only by Caffe2.\n\n                    This may be useful if the numeric differences in implementations of operators are\n                    causing large differences in behavior between PyTorch and Caffe2 (which is more\n                    common on untrained models).\n\n            * ``OperatorExportTypes.ONNX_ATEN_FALLBACK``: Try to export each ATen op\n                (in the TorchScript namespace \"aten\") as a regular ONNX op. If we are unable to do so\n                (e.g. because support has not been added to convert a particular torch op to ONNX),\n                fall back to exporting an ATen op. 
See documentation on OperatorExportTypes.ONNX_ATEN for\n                context.\n                For example::\n\n                    graph(%0 : Float):\n                    %3 : int = prim::Constant[value=0]()\n                    # conversion unsupported\n                    %4 : Float = aten::triu(%0, %3)\n                    # conversion supported\n                    %5 : Float = aten::mul(%4, %0)\n                    return (%5)\n\n                Assuming ``aten::triu`` is not supported in ONNX, this will be exported as::\n\n                    graph(%0 : Float):\n                    %1 : Long() = onnx::Constant[value={0}]()\n                    # not converted\n                    %2 : Float = aten::ATen[operator=\"triu\"](%0, %1)\n                    # converted\n                    %3 : Float = onnx::Mul(%2, %0)\n                    return (%3)\n\n                .. warning::\n\n                    Models exported this way are probably runnable only by Caffe2.\n\n        opset_version (int, default 17): The version of the\n            `default (ai.onnx) opset <https://github.com/onnx/onnx/blob/master/docs/Operators.md>`_\n            to target. Must be >= 7 and <= 17.\n        do_constant_folding (bool, default True): Apply the constant-folding optimization.\n            Constant-folding will replace some of the ops that have all constant inputs\n            with pre-computed constant nodes.\n        dynamic_axes (dict[string, dict[int, string]] or dict[string, list(int)], default empty dict):\n\n            By default the exported model will have the shapes of all input and output tensors\n            set to exactly match those given in ``args``. To specify axes of tensors as\n            dynamic (i.e. known only at run-time), set ``dynamic_axes`` to a dict with schema:\n\n            * KEY (str): an input or output name. Each name must also be provided in ``input_names`` or\n                ``output_names``.\n            * VALUE (dict or list): If a dict, keys are axis indices and values are axis names. 
If a\n                list, each element is an axis index.\n\n            For example::\n\n                class SumModule(torch.nn.Module):\n                    def forward(self, x):\n                        return torch.sum(x, dim=1)\n\n                torch.onnx.export(\n                    SumModule(),\n                    (torch.ones(2, 2),),\n                    \"onnx.pb\",\n                    input_names=[\"x\"],\n                    output_names=[\"sum\"]\n                )\n\n            Produces::\n\n                input {\n                  name: \"x\"\n                  ...\n                      shape {\n                        dim {\n                          dim_value: 2  # axis 0\n                        }\n                        dim {\n                          dim_value: 2  # axis 1\n                ...\n                output {\n                  name: \"sum\"\n                  ...\n                      shape {\n                        dim {\n                          dim_value: 2  # axis 0\n                ...\n\n            While::\n\n                torch.onnx.export(\n                    SumModule(),\n                    (torch.ones(2, 2),),\n                    \"onnx.pb\",\n                    input_names=[\"x\"],\n                    output_names=[\"sum\"],\n                    dynamic_axes={\n                        # dict value: manually named axes\n                        \"x\": {0: \"my_custom_axis_name\"},\n                        # list value: automatic names\n                        \"sum\": [0],\n                    }\n                )\n\n            Produces::\n\n                input {\n                  name: \"x\"\n                  ...\n                      shape {\n                        dim {\n                          dim_param: \"my_custom_axis_name\"  # axis 0\n                        }\n                        dim {\n                          dim_value: 2  # axis 1\n                ...\n                output {\n                  name: \"sum\"\n                  ...\n                      shape {\n                        dim {\n                          dim_param: \"sum_dynamic_axes_1\"  # axis 0\n                ...\n\n        keep_initializers_as_inputs (bool, default None): If True, all the\n            initializers (typically corresponding to parameters) in the\n            exported graph will also be added as inputs to the graph. If False,\n            then initializers are not added as inputs to the graph, and only\n            the non-parameter inputs are added as inputs.\n            This may allow for better optimizations (e.g. constant folding) by\n            backends/runtimes.\n\n            If True, `deduplicate_initializers` pass will not be executed. This means\n            initializers with duplicated values will not be deduplicated and\n            will be treated as distinct inputs to the graph. 
This allows different\n            input initializers to be supplied at the runtime following export.\n\n            If ``opset_version < 9``, initializers MUST be part of graph\n            inputs and this argument will be ignored and the behavior will be\n            equivalent to setting this argument to True.\n\n            If None, then the behavior is chosen automatically as follows:\n\n            * If ``operator_export_type=OperatorExportTypes.ONNX``, the behavior is equivalent\n                to setting this argument to False.\n            * Else, the behavior is equivalent to setting this argument to True.\n\n        custom_opsets (dict[str, int], default empty dict): A dict with schema:\n\n            * KEY (str): opset domain name\n            * VALUE (int): opset version\n\n            If a custom opset is referenced by ``model`` but not mentioned in this dictionary,\n            the opset version is set to 1. Only custom opset domain name and version should be\n            indicated through this argument.\n\n        export_modules_as_functions (bool or set of type of nn.Module, default False): Flag to enable\n            exporting all ``nn.Module`` forward calls as local functions in ONNX. Or a set to indicate the\n            particular types of modules to export as local functions in ONNX.\n            This feature requires ``opset_version`` >= 15, otherwise the export will fail. This is because\n            ``opset_version`` < 15 implies IR version < 8, which means no local function support.\n            Module variables will be exported as function attributes. There are two categories of function\n            attributes.\n\n            1. Annotated attributes: class variables that have type annotations via\n            `PEP 526-style <https://www.python.org/dev/peps/pep-0526/#class-and-instance-variable-annotations>`_\n            will be exported as attributes.\n            Annotated attributes are not used inside the subgraph of ONNX local function because\n            they are not created by PyTorch JIT tracing, but they may be used by consumers\n            to determine whether or not to replace the function with a particular fused kernel.\n\n            2. Inferred attributes: variables that are used by operators inside the module. Attribute names\n            will have prefix \"inferred::\". This is to differentiate from predefined attributes retrieved from\n            python module annotations. 
Inferred attributes are used inside the subgraph of ONNX local function.\n\n            * ``False`` (default): export ``nn.Module`` forward calls as fine grained nodes.\n            * ``True``: export all ``nn.Module`` forward calls as local function nodes.\n            * Set of type of nn.Module: export ``nn.Module`` forward calls as local function nodes,\n                only if the type of the ``nn.Module`` is found in the set.\n\n        autograd_inlining (bool, default True): Flag used to control whether to inline autograd functions.\n            Refer to https://github.com/pytorch/pytorch/pull/74765 for more details.\n\n        dynamo (bool, default False): Whether to export the model with Dynamo instead of TorchScript.\n\n    Raises:\n        :class:`torch.onnx.errors.CheckerError`: If the ONNX checker detects an invalid ONNX graph.\n        :class:`torch.onnx.errors.UnsupportedOperatorError`: If the ONNX graph cannot be exported because it\n            uses an operator that is not supported by the exporter.\n        :class:`torch.onnx.errors.OnnxExporterError`: Other errors that can occur during export.\n            All errors are subclasses of :class:`errors.OnnxExporterError`.\n    \"\"\"\n\n    if dynamo:\n        # Unsupported parameters for dynamo export\n        # TODO: These are not supported AT THE TIME\n        warnings.warn(\n            \"f, export_params, verbose, training, input_names, output_names, operator_export_type, opset_version, \"\n            \"do_constant_folding, keep_initializers_as_inputs, custom_opsets, export_modules_as_functions, and \"\n            \"autograd_inlining are not supported for dynamo export at the moment.\"\n        )\n        # TODO: check args normalization\n        args = _decide_input_format(model, args)\n        kwargs = {}\n        if args is not None and isinstance(args[-1], dict):\n            kwargs = args[-1]\n            args = args[:-1]\n        # TODO: refactor this when we have migrated ExportedProgam and\n        # needs users to specify dynamic_axes\n        if dynamic_axes is None or not isinstance(dynamic_axes, dict):\n            dynamic_shapes = False\n        else:\n            dynamic_shapes = True\n            warnings.warn(\n                \"Specified dynamic axes is not supported for dynamo export at the moment.\"\n            )\n        # TODO: expose more ExportOptions?\n        export_options = torch.onnx.ExportOptions(dynamic_shapes=dynamic_shapes)\n        onnx_program = torch.onnx.dynamo_export(\n            model, *args, **kwargs, export_options=export_options\n        )\n        if f is not None:\n            onnx_program.save(f)\n        return onnx_program\n\n    if f is None:\n        raise ValueError(\n            \"Export destination must be specified for torchscript-onnx export.\"\n        )\n\n    return _export(\n        model,\n        args,\n        f,\n        export_params,\n        verbose,\n        training,\n        input_names,\n        output_names,\n        operator_export_type=operator_export_type,\n        opset_version=opset_version,\n        do_constant_folding=do_constant_folding,\n        dynamic_axes=dynamic_axes,\n        keep_initializers_as_inputs=keep_initializers_as_inputs,\n        custom_opsets=custom_opsets,\n        export_modules_as_functions=export_modules_as_functions,\n        autograd_inlining=autograd_inlining,\n    )\n\n\n@_beartype.beartype\ndef _is_constant_tensor_list(node):\n    if node.kind() != \"prim::Constant\":\n        return False\n    output_type = 
node.output().type()\n    if output_type.isSubtypeOf(_C.ListType.ofTensors()):\n        return True\n    if output_type.isSubtypeOf(_C.ListType(_C.OptionalType.ofTensor())):\n        return True\n\n\n# ONNX can't handle constants that are lists of tensors, which can\n# get generated in constant prop. So we split them back into prim::ListConstructs\n\n\n@_beartype.beartype\ndef _split_tensor_list_constants(g, block):\n    for node in block.nodes():\n        for subblock in node.blocks():\n            _split_tensor_list_constants(g, subblock)\n        if _is_constant_tensor_list(node):\n            inputs = []\n            for val in node.output().toIValue():\n                input = g.insertConstant(val)\n                input.node().moveBefore(node)\n                input.node().copyMetadata(node)\n                inputs.append(input)\n\n            lc = (\n                g.create(\"prim::ListConstruct\", inputs)\n                .insertBefore(node)\n                .output()\n                .setType(_C.ListType.ofTensors())\n            )\n            lc.node().copyMetadata(node)\n            node.output().replaceAllUsesWith(lc)\n\n\n@_beartype.beartype\ndef _optimize_graph(\n    graph: _C.Graph,\n    operator_export_type: _C_onnx.OperatorExportTypes,\n    _disable_torch_constant_prop: bool = False,\n    fixed_batch_size: bool = False,\n    params_dict=None,\n    dynamic_axes=None,\n    input_names=None,\n    module=None,\n):\n    if params_dict is None:\n        params_dict = {}\n\n    # Inline everything\n    _C._jit_pass_inline(graph)\n\n    # Remove fork/wait nodes\n    _C._jit_pass_inline_fork_wait(graph)\n    _C._jit_pass_lint(graph)\n    if GLOBALS.autograd_inlining:\n        _C._jit_pass_onnx_autograd_function_process(graph)\n    _C._jit_pass_lower_all_tuples(graph)\n\n    # we now record some ops like ones/zeros\n    # into a trace where we previously recorded constants.\n    # use constant prop to maintain our current level of onnx support\n    # without implementing symbolics for all of them\n    if _disable_torch_constant_prop is False:\n        _C._jit_pass_constant_propagation(graph)\n\n    _split_tensor_list_constants(graph, graph)\n    # run dce to eliminate dead parts of the graph that might have been\n    # left behind by things like symbolic_override\n    _C._jit_pass_dce(graph)\n    _C._jit_pass_lint(graph)\n\n    # CSE should improve perf when Autocast is used with disabled cache\n    # Autocast is disabled due to a limitation on tracer as described at https://github.com/pytorch/pytorch/issues/84092\n    # Must run before _C._jit_pass_erase_number_types to prevent type substitution\n    if _C._jit_pass_cse(graph):\n        _C._jit_pass_onnx_lint(graph)\n\n    _C._jit_pass_canonicalize_graph_fuser_ops(graph)\n    _C._jit_pass_lint(graph)\n    _C._jit_pass_peephole(graph, True)\n    _C._jit_pass_fuse_addmm(graph)\n    _C._jit_pass_lint(graph)\n\n    _C._jit_pass_peephole(graph, True)\n    _C._jit_pass_lower_all_tuples(graph)\n    # in _jit_pass_onnx, symbolic functions are called for each node for conversion.\n    # However, there are nodes that cannot be converted without additional context.\n    # For example, the number of outputs from split (and whether it is static or dynamic) is unknown\n    # until the point where it is unpacked by listUnpack node.\n    # This pass does a preprocess, and prepares the nodes such that enough context can be received\n    # by the symbolic function.\n    _C._jit_pass_onnx_remove_inplace_ops_for_onnx(graph, module)\n    
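# ONNX-specific graph preprocessing ahead of the per-node symbolic\n    # conversion in _C._jit_pass_onnx below.\n    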
_C._jit_pass_onnx_preprocess(graph)\n\n    # onnx does not support tuples, so try to remove them\n    _C._jit_pass_lint(graph)\n\n    # onnx only supports tensors, but 1 / 2 = 0.5 and tensor(1) / tensor(2) = 0\n    _C._jit_pass_prepare_division_for_onnx(graph)\n\n    _C._jit_pass_onnx_remove_print(graph)\n    _C._jit_pass_onnx_preprocess_caffe2(graph)\n\n    symbolic_helper._quantized_ops.clear()\n    # Unpack quantized weights for conv and linear ops and insert into graph.\n    _C._jit_pass_onnx_unpack_quantized_weights(\n        graph, params_dict, symbolic_helper.is_caffe2_aten_fallback()\n    )\n    if symbolic_helper.is_caffe2_aten_fallback():\n        # Insert permutes before and after each conv op to ensure correct order.\n        _C._jit_pass_onnx_quantization_insert_permutes(graph, params_dict)\n\n        # Find consecutive permutes that are no-ops and remove them.\n        _C._jit_pass_custom_pattern_based_rewrite_graph(\n            textwrap.dedent(\n                \"\"\"\\\n                graph(%Pi):\n                    %Pq = quantized::nhwc2nchw(%Pi)\n                    %Pr = quantized::nchw2nhwc(%Pq)\n                    return (%Pr)\"\"\"\n            ),\n            textwrap.dedent(\n                \"\"\"\\\n                graph(%Ri):\n                    return (%Ri)\"\"\"\n            ),\n            graph,\n        )\n\n    # onnx only supports tensors, so we turn all out number types into tensors\n    _C._jit_pass_erase_number_types(graph)\n    if GLOBALS.onnx_shape_inference:\n        input_names = [] if input_names is None else input_names\n        dynamic_axes = {} if dynamic_axes is None else dynamic_axes\n        _C._jit_pass_onnx_set_dynamic_input_shape(graph, dynamic_axes, input_names)\n    _C._jit_pass_onnx_lint(graph)\n\n    graph = _C._jit_pass_onnx(graph, operator_export_type)\n    _C._jit_pass_onnx_lint(graph)\n    _C._jit_pass_lint(graph)\n\n    _C._jit_pass_onnx_scalar_type_analysis(\n        graph, True, GLOBALS.export_onnx_opset_version\n    )\n    _C._jit_pass_lint(graph)\n\n    _C._jit_pass_onnx_peephole(\n        graph, GLOBALS.export_onnx_opset_version, fixed_batch_size\n    )\n    _C._jit_pass_lint(graph)\n\n    # graph is not a valid jit graph anymore because types have been replaced\n    # (e.g. int with Tensor), so it now contains operators that don't actually\n    # exist. 
We can't run normal dead code elimination because it'd fail trying\n    # to look up if an operator has side effects, but we can run a dead code\n    # elimination variant that doesn't need to look up if an op has side effects.\n    _C._jit_pass_dce_allow_deleting_nodes_with_side_effects(graph)\n    _C._jit_pass_lint(graph)\n    graph = _C._jit_pass_canonicalize(graph)\n    _C._jit_pass_lint(graph)\n    if GLOBALS.onnx_shape_inference:\n        try:\n            _C._jit_pass_onnx_graph_shape_type_inference(\n                graph, params_dict, GLOBALS.export_onnx_opset_version\n            )\n        except RuntimeError as exc:\n            if (\n                _C_onnx._CAFFE2_ATEN_FALLBACK\n                and exc.args[0]\n                == \"ScalarType UNKNOWN_SCALAR is an unexpected tensor scalar type!\"\n            ):\n                # Caffe2 builds can have UNKNOWN_SCALAR for some tensors\n                pass\n\n    return graph\n\n\n@_beartype.beartype\ndef warn_on_static_input_change(input_states):\n    \"\"\"Warns that changes to input dictionaries and strings won't take effect in the traced ONNX graph.\n\n    We accept dictionaries and strings as ONNX inputs, but they should be only for\n    configuration use. we detect here if these inputs are modified, and if so we warn\n    the user that the changes won't take effect in the traced ONNX graph.\n    \"\"\"\n    for input, traced_input in zip(input_states[0], input_states[1]):\n        if isinstance(input, dict):\n            if list(input.keys()) != list(traced_input.keys()):\n                warning = (\n                    \"We detected that you are modifying a dictionary that is an input to your \"\n                    \"model. \"\n                    \"Note that dictionaries are allowed as inputs in ONNX but they should be \"\n                    \"handled with care. \"\n                    \"Usages of dictionaries is not recommended, and should not be used except \"\n                    \"for configuration use. \"\n                    \"Also note that the order and values of the keys must remain the same. \"\n                )\n                warnings.warn(warning)\n        elif isinstance(input, str):\n            if input != traced_input:\n                warning = (\n                    \"The model seems to have string inputs/outputs. \"\n                    \"Note that strings will not appear as inputs/outputs of the ONNX graph. \"\n                )\n                warnings.warn(warning)\n\n\n@_beartype.beartype\ndef _resolve_args_by_export_type(arg_name, arg_value, operator_export_type):\n    \"\"\"Resolves the arguments that are ignored when export_type != operator_export_type.ONNX.\"\"\"\n    if (\n        operator_export_type is not operator_export_type.ONNX\n        and _C_onnx._CAFFE2_ATEN_FALLBACK\n    ):\n        if arg_value is True:\n            warnings.warn(\n                f\"'{arg_name}' can be set to True only when 'operator_export_type' is \"\n                \"`ONNX`. 
Since 'operator_export_type' is not set to 'ONNX', \"\n                f\"'{arg_name}' argument will be ignored.\"\n            )\n        arg_value = False\n    return arg_value\n\n\n@_beartype.beartype\ndef _decide_keep_init_as_input(\n    keep_initializers_as_inputs: Optional[bool],\n    operator_export_type: _C_onnx.OperatorExportTypes,\n    opset_version: int,\n):\n    \"\"\"Decides whether the initializers in the graph should be listed as ONNX graph inputs.\n\n    This method encapsulates the logic to decide whether the initializers in the graph\n    should be listed as ONNX graph inputs (i.e., whether to choose ONNX IR v3 or v4).\n    If keep_initializers_as_inputs is not specified (None), then we decide whether to keep\n    initializers as graph inputs (val_keep_init_as_ip) based on export type. If export type\n    is ONNX, then do not keep initializers as input (val_keep_init_as_ip=False). For all other\n    export types keep initializers as input (val_keep_init_as_ip=True).\n    If keep_initializers_as_inputs is specified, then respect it, unless opset version <= 8,\n    in which case it must be ignored because for opset version <= 8, all initializers MUST be\n    part of graph input (only ONNX IR v3 is allowed), i.e. val_keep_init_as_ip=True.\n\n    Special handling is needed for opset version 8 or lower, because irrespective\n    of user input for keep_initializers_as_inputs, the graph must follow ONNX IR v3\n    semantics, i.e. all initializers must be listed as ONNX graph input.\n    \"\"\"\n\n    if opset_version < 9:\n        if keep_initializers_as_inputs is False:\n            warnings.warn(\n                \"Setting 'keep_initializers_as_inputs=False' for opset version \"\n                \"8 or lower would lead to an invalid ONNX graph. Therefore, \"\n                \"'keep_initializers_as_inputs=False' is ignored during export. \"\n                \"Exported model will have initializers as graph inputs (compliant \"\n                \"to ONNX IR v3).\"\n            )\n        return True  # i.e. True == initializers are part of graph input (ONNX IR v3)\n    val_keep_init_as_ip = (\n        True if keep_initializers_as_inputs is None else keep_initializers_as_inputs\n    )\n    if (\n        keep_initializers_as_inputs is None\n        and operator_export_type is _C_onnx.OperatorExportTypes.ONNX\n    ):\n        val_keep_init_as_ip = False\n    return val_keep_init_as_ip\n\n\n@_beartype.beartype\ndef _decide_add_node_names(add_node_names, operator_export_type):\n    return _resolve_args_by_export_type(\n        \"add_node_names\", add_node_names, operator_export_type\n    )\n\n\n@_beartype.beartype\ndef _decide_constant_folding(do_constant_folding, operator_export_type, training):\n    do_constant_folding = _resolve_args_by_export_type(\n        \"do_constant_folding\", do_constant_folding, operator_export_type\n    )\n    if do_constant_folding and (\n        training is not None and training is not _C_onnx.TrainingMode.EVAL\n    ):\n        warnings.warn(\n            \"It is recommended that constant folding be turned off ('do_constant_folding=False') \"\n            \"when exporting the model in training-amenable mode, i.e. with 'training=TrainingMode.TRAIN' \"\n            \"or 'training=TrainingMode.PRESERVE' (when model is in training mode). Otherwise, some \"\n            \"learnable model parameters may not translate correctly in the exported ONNX model \"\n            \"because constant folding mutates model parameters. 
Please consider \"\n            \"turning off constant folding or setting the training=TrainingMode.EVAL.\"\n        )\n    return do_constant_folding\n\n\n@_beartype.beartype\ndef _signature(model) -> inspect.Signature:\n    should_be_callable = getattr(model, \"forward\", model)\n    if callable(should_be_callable):\n        return inspect.signature(should_be_callable)\n    raise ValueError(\"model has no forward method and is not callable\")\n\n\n@_beartype.beartype\ndef _decide_input_format(model, args):\n    try:\n        sig = _signature(model)\n    except ValueError as e:\n        warnings.warn(f\"{e}, skipping _decide_input_format\")\n        return args\n    try:\n        ordered_list_keys = list(sig.parameters.keys())\n        if ordered_list_keys[0] == \"self\":\n            ordered_list_keys = ordered_list_keys[1:]\n        args_dict: Dict = {}\n        if isinstance(args, list):\n            args_list = args\n        elif isinstance(args, tuple):\n            args_list = list(args)\n        else:\n            args_list = [args]\n        if isinstance(args_list[-1], dict):\n            args_dict = args_list[-1]\n            args_list = args_list[:-1]\n        n_nonkeyword = len(args_list)\n        for optional_arg in ordered_list_keys[n_nonkeyword:]:\n            if optional_arg in args_dict:\n                args_list.append(args_dict[optional_arg])\n            # Check if this arg has a default value\n            else:\n                param = sig.parameters[optional_arg]\n                if param.default != param.empty:\n                    args_list.append(param.default)\n        args = args_list if isinstance(args, list) else tuple(args_list)\n    # Cases of models with no input args\n    except IndexError:\n        warnings.warn(\"No input args, skipping _decide_input_format\")\n    except Exception as e:\n        warnings.warn(f\"Skipping _decide_input_format\\n {e.args[0]}\")\n    return args\n\n\n@_beartype.beartype\ndef _trace(func, args, operator_export_type, return_outs=False):\n    # Special case for common case of passing a single Tensor\n    if isinstance(args, torch.Tensor):\n        args = (args,)\n\n    trace_graph, torch_out, inputs_states = torch.jit._get_trace_graph(\n        func,\n        args,\n        strict=False,\n        _force_outplace=False,\n        _return_inputs_states=True,\n    )\n    warn_on_static_input_change(inputs_states)\n\n    trace_graph = _optimize_graph(trace_graph, operator_export_type, params_dict={})\n    if return_outs:\n        return trace_graph, torch_out\n    return trace_graph\n\n\n@_beartype.beartype\ndef _trace_and_get_graph_from_model(model, args):\n    # A basic sanity check: make sure the state_dict keys are the same\n    # before and after running the model.  
Fail fast!\n    orig_state_dict_keys = torch.jit._unique_state_dict(model).keys()\n\n    # Disable Autocast cache because it replaces kernel's weight and bias\n    # by (undesired) constants.\n    # No perf impact for when there are reused weights since https://github.com/pytorch/pytorch/pull/85665\n    prev_autocast_cache_enabled = torch.is_autocast_cache_enabled()\n    torch.set_autocast_cache_enabled(False)\n    trace_graph, torch_out, inputs_states = torch.jit._get_trace_graph(\n        model,\n        args,\n        strict=False,\n        _force_outplace=False,\n        _return_inputs_states=True,\n    )\n    torch.set_autocast_cache_enabled(prev_autocast_cache_enabled)\n\n    warn_on_static_input_change(inputs_states)\n\n    if orig_state_dict_keys != torch.jit._unique_state_dict(model).keys():\n        raise RuntimeError(\n            \"state_dict changed after running the tracer; \"\n            \"something weird is happening in your model!\"\n        )\n\n    return trace_graph, torch_out\n\n\n@_beartype.beartype\ndef _get_param_count_list(method_graph, args_params):\n    param_count_list = []\n    for input_, arg_params_ in zip(method_graph.inputs(), args_params):\n        if \"PackedParams\" in str(input_.type()):\n            in_vars, _ = torch.jit._flatten(arg_params_)\n            param_count_list.append(len(in_vars))\n        else:\n            param_count_list.append(arg_params_ is not None)\n\n    return param_count_list\n\n\n@_beartype.beartype\ndef _check_flatten_did_not_remove(original, jit_flattened):\n    \"\"\"torch.jit._flatten removes None. Check if it did so in this case.\"\"\"\n\n    @_beartype.beartype\n    def flatten(x):\n        if isinstance(x, (list, tuple)):\n            for inner in x:\n                yield from flatten(inner)\n        elif isinstance(x, dict):\n            for inner in x.values():\n                yield from flatten(inner)\n        else:\n            yield x\n\n    flattened_with_none = list(flatten(original))\n    num_none = len(flattened_with_none) - len(jit_flattened)\n    assert num_none >= 0\n    if num_none:\n        raise ValueError(\n            f\"args contained {num_none} None's after flattening. 
\"\n            \"When exporting a ScriptModule or ScriptFunction, no args may \"\n            \"be None because that breaks type propagation.\"\n        )\n\n\ndef _create_jit_graph(\n    model: Union[torch.nn.Module, torch.jit.ScriptFunction], args: Sequence[Any]\n) -> Tuple[_C.Graph, List[_C.IValue], Optional[Any], Optional[_C.ScriptModule]]:\n    if isinstance(model, (torch.jit.ScriptFunction, torch.jit.ScriptModule)):\n        flattened_args = tuple(torch.jit._flatten(tuple(args))[0])\n        _check_flatten_did_not_remove(args, flattened_args)\n        torch_out = None\n\n        if isinstance(model, torch.jit.ScriptModule):\n            try:\n                graph = model.forward.graph  # type: ignore[attr-defined]\n            except AttributeError as e:\n                raise RuntimeError(\"'forward' method must be a script method\") from e\n            _C._jit_pass_onnx_function_substitution(graph)\n            freezed_module = _C._freeze_module(\n                cast(_C.ScriptModule, model._c), preserveParameters=True\n            )\n            module, params = _C._jit_onnx_list_model_parameters(freezed_module)\n            method_graph = module._get_method(\"forward\").graph\n            args_params = tuple(args) + tuple(params)\n            param_count_list = _get_param_count_list(method_graph, args_params)\n            in_vars, _ = torch.jit._flatten(args_params)\n            graph = _C._propagate_and_assign_input_shapes(\n                method_graph, tuple(in_vars), param_count_list, False, False\n            )\n            return graph, params, torch_out, module\n\n        # torch.jit.ScriptFunction\n        params = []\n        graph = model.graph\n        _C._jit_pass_onnx_function_substitution(graph)\n        param_count_list = _get_param_count_list(graph, args)\n        graph = _C._propagate_and_assign_input_shapes(\n            graph, flattened_args, param_count_list, False, False\n        )\n        return graph, params, torch_out, None\n\n    graph, torch_out = _trace_and_get_graph_from_model(model, args)\n    _C._jit_pass_onnx_lint(graph)\n    state_dict = torch.jit._unique_state_dict(model)\n    params = list(state_dict.values())\n    graph_inputs = list(graph.inputs())\n    user_input_num = len(graph_inputs) - len(state_dict)\n    param_names = list(state_dict.keys())\n    for i, inp in enumerate(graph_inputs):\n        if i >= user_input_num:\n            inp.setDebugName(param_names[i - user_input_num])\n    _C._jit_pass_onnx_function_substitution(graph)\n    return graph, params, torch_out, None\n\n\n@_beartype.beartype\ndef _get_named_param_dict(graph, params):\n    input_and_param_names = [val.debugName() for val in graph.inputs()]\n    param_names = input_and_param_names[len(input_and_param_names) - len(params) :]\n    _params_dict = dict(zip(param_names, params))\n    return _params_dict\n\n\n@_beartype.beartype\ndef _get_example_outputs(model, args):\n    input_args = copy.deepcopy(args)\n    input_kwargs = {}\n    if input_args and isinstance(input_args[-1], dict):\n        input_kwargs = input_args[-1]\n        input_args = input_args[:-1]\n\n    example_outputs = model(*input_args, **input_kwargs)\n    if isinstance(example_outputs, list):\n        example_outputs = [example_outputs]\n    elif not isinstance(example_outputs, tuple):\n        example_outputs = (example_outputs,)\n\n    return example_outputs\n\n\n_qtype_vtype_map = {\n    torch.quint8: torch.uint8,\n    torch.qint8: torch.int8,\n    torch.qint32: torch.int32,\n    torch.quint4x2: 
torch.int8,\n}\n\n\n@_beartype.beartype\ndef unpack_quantized_tensor(value, cast_onnx_accepted=True):\n    if isinstance(value, torch.Tensor) and value.dtype in _qtype_vtype_map:\n        q_value_dequantize = value.dequantize()\n        q_scale = (\n            torch.tensor(value.q_scale(), dtype=torch.double)\n            if cast_onnx_accepted\n            else torch.tensor(value.q_scale(), dtype=torch.float32)\n        )\n        q_zero_point = (\n            torch.tensor(value.q_zero_point(), dtype=torch.int64)\n            if cast_onnx_accepted\n            else torch.tensor(value.q_zero_point(), dtype=_qtype_vtype_map[value.dtype])\n        )\n        q_value = q_value_dequantize / q_scale + q_zero_point\n        q_value = q_value.to(dtype=_qtype_vtype_map[value.dtype])\n        return q_value, q_scale, q_zero_point\n    else:\n        return (value,)\n\n\n@_beartype.beartype\ndef _pre_trace_quant_model(model, args):\n    r\"\"\"Returns `torch.jit.trace(model, args)` if model is quantized. Otherwise do nothing and return\n    original model.\n\n    This is due to https://github.com/pytorch/pytorch/issues/75761.\n    \"\"\"\n    if any(\n        hasattr(m, \"_packed_params\") for m in getattr(model, \"modules\", list)()\n    ) or any(getattr(arg, \"is_quantized\", False) for arg in args):\n        return torch.jit.trace(model, args)\n    return model\n\n\n@_beartype.beartype\ndef _model_to_graph(\n    model,\n    args,\n    verbose=False,\n    input_names=None,\n    output_names=None,\n    operator_export_type=_C_onnx.OperatorExportTypes.ONNX,\n    do_constant_folding=True,\n    _disable_torch_constant_prop=False,\n    fixed_batch_size=False,\n    training=_C_onnx.TrainingMode.EVAL,\n    dynamic_axes=None,\n) -> Tuple[\n    _C.Graph,\n    Dict[str, torch.Tensor],\n    Optional[\n        Union[\n            torch.Tensor,\n            Tuple[torch.Tensor, ...],\n            List[torch.Tensor],\n            Dict[str, torch.Tensor],\n            Any,  # Can be nested tuples etc.\n        ]\n    ],\n]:\n    \"\"\"Converts model into an ONNX graph.\n\n    Returns:\n        graph: A TorchScript IR Graph with ONNX nodes.\n        params_dict: Dict from input param name to param value.\n        torch_out: The output tensors resulting from the trace of ``model``.\n            If ``model`` is a :class:`torch.jit.ScriptModule` or :class:`torch.jit.ScriptFunction`,\n            this will be None, since we are not doing any tracing.\n    \"\"\"\n    # TODO: can we simplify this to always return a tuple of Tensor or None?\n\n    # Special case for common case of passing a single Tensor\n    if isinstance(args, (torch.Tensor, int, float, bool)):\n        args = (args,)\n\n    model = _pre_trace_quant_model(model, args)\n    graph, params, torch_out, module = _create_jit_graph(model, args)\n    params_dict = _get_named_param_dict(graph, params)\n\n    try:\n        graph = _optimize_graph(\n            graph,\n            operator_export_type,\n            _disable_torch_constant_prop=_disable_torch_constant_prop,\n            fixed_batch_size=fixed_batch_size,\n            params_dict=params_dict,\n            dynamic_axes=dynamic_axes,\n            input_names=input_names,\n            module=module,\n        )\n    except Exception as e:\n        torch.onnx.log(\"Torch IR graph at exception: \", graph)\n        raise\n\n    is_script = isinstance(model, (torch.jit.ScriptFunction, torch.jit.ScriptModule))\n    if is_script:\n        example_outputs = _get_example_outputs(model, args)\n        
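# Quantized example outputs expand to (value, scale, zero_point)\n        # triples before output shapes are assigned onto the graph.\n        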
example_outputs_final = ()\n        for example_output in example_outputs:\n            example_outputs_final += unpack_quantized_tensor(example_output)\n        out_vars, desc = torch.jit._flatten(example_outputs_final)\n        _C._jit_pass_onnx_assign_output_shape(\n            graph,\n            out_vars,\n            desc,\n            GLOBALS.onnx_shape_inference,\n            is_script,\n            GLOBALS.export_onnx_opset_version,\n        )\n\n    # NB: ONNX requires complete information about output types, which might be\n    # erased by some optimizations, so we need to set it explicitly again.\n    else:\n        if not isinstance(torch_out, (list, tuple)):\n            output_wrapped = [torch_out]\n        else:\n            output_wrapped = torch_out  # type: ignore[assignment]\n\n        output_tensors, out_desc = torch.jit._flatten(tuple(output_wrapped))\n        # assign_output_shape pass is not compatible with quantized outputs.\n        # Quantized outputs are flattened to 3 values in ONNX, while packed as\n        # single value in PyTorch.\n        if not any(getattr(out, \"is_quantized\", False) for out in output_tensors):\n            _C._jit_pass_onnx_assign_output_shape(\n                graph,\n                output_tensors,\n                out_desc,\n                GLOBALS.onnx_shape_inference,\n                is_script,\n                GLOBALS.export_onnx_opset_version,\n            )\n\n    _set_input_and_output_names(graph, input_names, output_names)\n    params_dict = _get_named_param_dict(graph, params)\n\n    if (\n        do_constant_folding\n        and GLOBALS.export_onnx_opset_version\n        >= _constants.ONNX_CONSTANT_FOLDING_MIN_OPSET\n    ):\n        if training is None or training == _C_onnx.TrainingMode.EVAL:\n            params_dict = _C._jit_pass_onnx_eval_peephole(graph, params_dict)\n\n        params_dict = _C._jit_pass_onnx_constant_fold(\n            graph, params_dict, GLOBALS.export_onnx_opset_version\n        )\n        _C._jit_pass_dce_allow_deleting_nodes_with_side_effects(graph)\n\n    if GLOBALS.onnx_shape_inference:\n        try:\n            _C._jit_pass_onnx_graph_shape_type_inference(\n                graph, params_dict, GLOBALS.export_onnx_opset_version\n            )\n        except RuntimeError as exc:\n            if (\n                _C_onnx._CAFFE2_ATEN_FALLBACK\n                and exc.args[0]\n                == \"ScalarType UNKNOWN_SCALAR is an unexpected tensor scalar type!\"\n            ):\n                # Caffe2 builds can have UNKNOWN_SCALAR for some tensors\n                pass\n\n    params_dict = _C._jit_pass_onnx_eliminate_unused_items(graph, params_dict)\n\n    # For ONNX opset < 9, constants only have three data types: float16, float, double.\n    # In this pass transform constants of other data types to float/double + cast operator.\n    if GLOBALS.export_onnx_opset_version < 9:\n        _C._jit_pass_onnx_cast_all_constant_to_floating(graph)\n\n    params_dict = _C._jit_pass_filter_non_tensor_arguments(params_dict)\n    _C._jit_decay_packed_param_input_types(graph)\n\n    # If output names lack a proper name and are identified only by their unique\n    # give them a legible name for debugging purposes\n    _apply_friendly_debug_names(graph, params_dict)\n\n    return graph, params_dict, torch_out\n\n\n@_beartype.beartype\n@torch._disable_dynamo\ndef export_to_pretty_string(\n    model,\n    args,\n    export_params=True,\n    verbose=False,\n    training=_C_onnx.TrainingMode.EVAL,\n    
input_names=None,\n    output_names=None,\n    operator_export_type=_C_onnx.OperatorExportTypes.ONNX,\n    export_type=None,\n    google_printer=False,\n    opset_version=None,\n    keep_initializers_as_inputs=None,\n    custom_opsets=None,\n    add_node_names=True,\n    do_constant_folding=True,\n    dynamic_axes=None,\n):\n    r\"\"\"\n    Similar to :func:`export`, but returns a text representation of the ONNX\n    model. Only differences in args are listed below. All other args are the same\n    as :func:`export`.\n\n    Args:\n        add_node_names (bool, default True): Whether or not to set\n            NodeProto.name. This makes no difference unless\n            ``google_printer=True``.\n        google_printer (bool, default False): If False, will return a custom,\n            compact representation of the model. If True, will return the\n            protobuf's `Message::DebugString()`, which is more verbose.\n\n    Returns:\n        A UTF-8 str containing a human-readable representation of the ONNX model.\n    \"\"\"\n    if opset_version is None:\n        opset_version = _constants.ONNX_DEFAULT_OPSET\n    if custom_opsets is None:\n        custom_opsets = {}\n    GLOBALS.export_onnx_opset_version = opset_version\n    GLOBALS.operator_export_type = operator_export_type\n\n    with exporter_context(model, training, verbose):\n        val_keep_init_as_ip = _decide_keep_init_as_input(\n            keep_initializers_as_inputs, operator_export_type, opset_version\n        )\n        val_add_node_names = _decide_add_node_names(\n            add_node_names, operator_export_type\n        )\n        val_do_constant_folding = _decide_constant_folding(\n            do_constant_folding, operator_export_type, training\n        )\n        args = _decide_input_format(model, args)\n        graph, params_dict, torch_out = _model_to_graph(\n            model,\n            args,\n            verbose,\n            input_names,\n            output_names,\n            operator_export_type,\n            val_do_constant_folding,\n            training=training,\n            dynamic_axes=dynamic_axes,\n        )\n\n        return graph._pretty_print_onnx(  # type: ignore[attr-defined]\n            params_dict,\n            opset_version,\n            False,\n            operator_export_type,\n            google_printer,\n            val_keep_init_as_ip,\n            custom_opsets,\n            val_add_node_names,\n        )\n\n\n@_beartype.beartype\ndef unconvertible_ops(\n    model,\n    args,\n    training: _C_onnx.TrainingMode = _C_onnx.TrainingMode.EVAL,\n    opset_version: Optional[int] = None,\n) -> Tuple[_C.Graph, List[str]]:\n    \"\"\"Returns an approximated list of all ops that are not yet supported by :mod:`torch.onnx`.\n\n    The list is approximated because some ops may be removed during the conversion\n    process and don't need to be converted. Some other ops may have partial support\n    that will fail conversion with particular inputs. 
Please open a Github Issue\n    for op support requests.\n\n    Args:\n        model: Same as the `model` parameter in :func:`torch.onnx.export`.\n        args: Same as the `args` parameter in :func:`torch.onnx.export`.\n        training: Same as the `training` parameter in :func:`torch.onnx.export`.\n        opset_version: Same as the `opset_version` parameter in :func:`torch.onnx.export`.\n\n    Returns:\n        The JIT graph and a list of unconvertible ops in the format of \"domain::op\".\n    \"\"\"\n\n    opset_version = opset_version or _constants.ONNX_DEFAULT_OPSET\n    GLOBALS.export_onnx_opset_version = opset_version\n\n    try:\n        with exporter_context(model, training, verbose=False):\n            # Create a mostly clean JIT graph that contains the plain aten and\n            # other ops we can check with the symbolic registry.\n            # NOTE: We don't want to actually convert any ops to ONNX or run any\n            # symbolic functions because there is a higher chance that a pass\n            # fails or an unconvertible op messes up the graph during ONNX conversion.\n            # This way we can always generate a list just by looking at the names\n            # of the ops in the graph.\n            args = _decide_input_format(model, args)\n            model = _pre_trace_quant_model(model, args)\n            graph, _, _, module = _create_jit_graph(model, args)\n            _C._jit_pass_inline(graph)\n            _C._jit_pass_onnx_remove_inplace_ops_for_onnx(graph, module)\n            _C._jit_pass_erase_number_types(graph)\n            _C._jit_pass_dce_allow_deleting_nodes_with_side_effects(graph)\n    except Exception as e:\n        raise errors.OnnxExporterError(\n            \"Failed to discover unconvertible ops because of errors during the JIT graph \"\n            \"generation process.\"\n        ) from e\n\n    unsupported_ops = []\n    for node in graph.nodes():\n        domain_op = node.kind()\n        if domain_op.startswith((\"onnx::\", \"prim::\")):\n            # We consider onnx and prim ops as supported ops, even though some \"prim\"\n            # ops are not implemented as symbolic functions, because they may be\n            # eliminated in the conversion passes. 
Users may still see errors caused\n            # by prim ops even though they don't show up in the list.\n            continue\n        if not registration.registry.is_registered_op(\n            domain_op.rstrip(\"_\"), opset_version\n        ):\n            # We consider all registered ops supported, even though some of them are\n            # only partially supported, because there is not yet a good way to check\n            # if an op is fully supported.\n            # TODO(justinchuby): Create a way to check if an op is fully supported.\n            unsupported_ops.append(domain_op)\n    return graph, unsupported_ops\n\n\n@_beartype.beartype\ndef _setup_trace_module_map(\n    model: Union[torch.nn.Module, torch.jit.ScriptModule],\n    export_modules_as_functions: Union[bool, Collection[Type[torch.nn.Module]]],\n) -> Set[str]:\n    def __register_attribute_hook():\n        attr_name = \"_onnx_attrs\"\n\n        def _track_module_attributes_forward_pre_hook(module, input):\n            setattr(module, attr_name, _get_module_attributes(module))\n\n        def _track_module_attributes_forward_hook(module, input, output):\n            tracing_state = _C._get_tracing_state()\n            if not tracing_state:\n                return\n\n            graph = tracing_state.graph()\n            onnx_attrs = {}\n            if hasattr(module, attr_name):\n                onnx_attrs = getattr(module, attr_name)\n                delattr(module, attr_name)\n\n            _C._jit_pass_onnx_track_scope_attributes(graph, onnx_attrs)\n\n        for m in model.modules():\n            m.register_forward_hook(_track_module_attributes_forward_hook)\n            m.register_forward_pre_hook(_track_module_attributes_forward_pre_hook)\n\n    def _unqualified_variable_name(qualified_name: str) -> str:\n        \"\"\"\n        Parse a qualified variable name and return the unqualified version.\n\n        Pure numeric atoms are considered inadequate, so this function will look past them,\n        and start from the first non-numeric atom.\n\n        Example:\n            >>> _unqualified_variable_name('__main__.Foo.bar')\n            'bar'\n            >>> _unqualified_variable_name('__main__.Foo.bar.0')\n            'bar.0'\n        \"\"\"\n        name_atoms = qualified_name.split(\".\")\n        for i, atom in reversed(list(enumerate(name_atoms))):\n            if not atom.isnumeric():\n                return \".\".join(name_atoms[i:])\n        return qualified_name\n\n    trace_module_map = {\n        _m: torch._C._jit_onnx_create_full_scope_name(\n            torch.typename(type(_m)), _unqualified_variable_name(_n)\n        )\n        for _n, _m in model.named_modules()\n    }\n    torch.jit._trace._trace_module_map = trace_module_map\n    if isinstance(export_modules_as_functions, bool) and export_modules_as_functions:\n        module_typenames = {torch.typename(type(module)) for module in trace_module_map}\n    elif isinstance(export_modules_as_functions, set) and export_modules_as_functions:\n\n        def _find_typename(v):\n            if isinstance(v, type):\n                return torch.typename(v)\n            else:\n                raise RuntimeError(\n                    \"Only the type of an \`nn.Module\` should be \"\n                    \"passed in the set for argument \`export_modules_as_functions\`. 
\"\n                    f\"Got `{type(v).__name__}`.\"\n                )\n\n        module_typenames = {_find_typename(v) for v in export_modules_as_functions}\n    else:\n        module_typenames = set()\n\n    if module_typenames:\n        __register_attribute_hook()\n\n    return module_typenames\n\n\n@_beartype.beartype\ndef _reset_trace_module_map():\n    torch.jit._trace._trace_module_map = None\n    _C._jit_pass_onnx_clear_scope_records()\n\n\n@_beartype.beartype\ndef _get_module_attributes(module):\n    annotations = typing.get_type_hints(type(module))\n    base_m_annotations = typing.get_type_hints(torch.nn.Module)\n    [annotations.pop(k, None) for k in base_m_annotations]\n    # Check whether module attributes can be accessed. Some classes\n    # define attributes but don't provide access to them in their\n    # constructor.\n    #\n    # For example, torch.nn.Embedding has the `freeze` variable and its\n    # type specified in the class but the attribute is not created in the\n    # constructor. In other words, there is no `self.freeze = <True | False>`\n    # in the constructor.\n    #\n    # Reference: https://github.com/pytorch/pytorch/blob/92de1d322223fb5584e384971b32c46b93bc2f4b/torch/nn/modules/sparse.py#L120\n    attrs = {}\n    for k in annotations:\n        try:\n            attrs[k] = getattr(module, k)\n        except AttributeError:\n            torch.onnx.log(f\"Skipping module attribute '{k}'\")\n            continue\n    return attrs\n\n\n@_beartype.beartype\ndef _export(\n    model,\n    args,\n    f,\n    export_params=True,\n    verbose=False,\n    training=_C_onnx.TrainingMode.EVAL,\n    input_names=None,\n    output_names=None,\n    operator_export_type=_C_onnx.OperatorExportTypes.ONNX,\n    export_type=None,\n    opset_version=None,\n    do_constant_folding=True,\n    dynamic_axes=None,\n    keep_initializers_as_inputs=None,\n    fixed_batch_size=False,\n    custom_opsets=None,\n    add_node_names=True,\n    onnx_shape_inference=True,\n    export_modules_as_functions=False,\n    autograd_inlining=True,\n):\n    assert GLOBALS.in_onnx_export is False\n\n    if export_type is None:\n        export_type = _exporter_states.ExportTypes.PROTOBUF_FILE\n\n    # Discussed deprecation with Nikita Shulga and Sergii Dymchenko from Meta\n    if _C_onnx._CAFFE2_ATEN_FALLBACK:\n        warnings.warn(\n            \"Caffe2 ONNX exporter is deprecated in version 2.0 and will be \"\n            \"removed in 2.2. Please use PyTorch 2.1 or older for this capability.\",\n            category=FutureWarning,\n            stacklevel=2,\n        )\n\n    if isinstance(model, torch.nn.DataParallel):\n        raise ValueError(\n            \"torch.nn.DataParallel is not supported by ONNX \"\n            \"exporter, please use 'attribute' module to \"\n            \"unwrap model from torch.nn.DataParallel. Try \"\n            \"torch.onnx.export(model.module, ...)\"\n        )\n\n    GLOBALS.onnx_shape_inference = onnx_shape_inference\n\n    if opset_version is None:\n        opset_version = _constants.ONNX_DEFAULT_OPSET\n\n    # torch.onnx.export does not support opset versions >=18\n    if opset_version > _constants.ONNX_TORCHSCRIPT_EXPORTER_MAX_OPSET:\n        # We do not want to fail because we should still allow users to create\n        # custom symbolic functions for opset>17\n        warnings.warn(\n            f\"Exporting to ONNX opset version {opset_version} is not supported. \"\n            f\"by 'torch.onnx.export()'. 
\"\n            f\"The highest opset version supported is {_constants.ONNX_TORCHSCRIPT_EXPORTER_MAX_OPSET}. \"\n            f\"To use a newer opset version, consider 'torch.onnx.dynamo_export()'. \"\n            f\"Note that dynamo_export() is in preview. Please report errors with \"\n            f\"dynamo_export() as Github issues to https://github.com/pytorch/pytorch/issues.\",\n            category=errors.OnnxExporterWarning,\n        )\n\n    if export_modules_as_functions and opset_version < 15:\n        raise ValueError(\n            \"`export_modules_as_functions` is not supported for `opset_version` < 15.\"\n            \"This is because `opset_version` < 15 implies IR version < 8, which means \"\n            \"no local function support. \"\n        )\n    if not operator_export_type:\n        if _C_onnx._CAFFE2_ATEN_FALLBACK:\n            operator_export_type = _C_onnx.OperatorExportTypes.ONNX_ATEN_FALLBACK\n        else:\n            operator_export_type = _C_onnx.OperatorExportTypes.ONNX\n\n    # By default, training=TrainingMode.EVAL,\n    # which is good because running a model in training mode could result in\n    # internal buffers getting updated, dropout getting applied, etc.\n    # If you really know what you're doing, you can turn\n    # training=TrainingMode.TRAINING or training=TrainingMode.PRESERVE,\n    # (to preserve whatever the original training mode was.)\n    GLOBALS.export_onnx_opset_version = opset_version\n    GLOBALS.operator_export_type = operator_export_type\n\n    try:\n        GLOBALS.in_onnx_export = True\n        _autograd_inlining_previous = GLOBALS.autograd_inlining\n        GLOBALS.autograd_inlining = autograd_inlining\n\n        module_typenames_to_export_as_functions: Set[str] = set()\n        if isinstance(model, (torch.nn.Module, torch.jit.ScriptModule)):\n            module_typenames_to_export_as_functions = _setup_trace_module_map(\n                model, export_modules_as_functions\n            )\n\n        with exporter_context(model, training, verbose):\n            val_keep_init_as_ip = _decide_keep_init_as_input(\n                keep_initializers_as_inputs,\n                operator_export_type,\n                opset_version,\n            )\n            val_add_node_names = _decide_add_node_names(\n                add_node_names, operator_export_type\n            )\n            val_do_constant_folding = _decide_constant_folding(\n                do_constant_folding, operator_export_type, training\n            )\n            # Normally f can be a file-like object, but for large models, the external data format requires a\n            # valid `model_file_location`. 
Code in export.cpp will enforce this.\n            if isinstance(f, str):\n                model_file_location = f\n            else:\n                model_file_location = \"\"\n            args = _decide_input_format(model, args)\n            if dynamic_axes is None:\n                dynamic_axes = {}\n            _validate_dynamic_axes(dynamic_axes, model, input_names, output_names)\n\n            graph, params_dict, torch_out = _model_to_graph(\n                model,\n                args,\n                verbose,\n                input_names,\n                output_names,\n                operator_export_type,\n                val_do_constant_folding,\n                fixed_batch_size=fixed_batch_size,\n                training=training,\n                dynamic_axes=dynamic_axes,\n            )\n\n            # TODO: Don't allocate a in-memory string for the protobuf\n            defer_weight_export = (\n                export_type is not _exporter_states.ExportTypes.PROTOBUF_FILE\n            )\n            if custom_opsets is None:\n                custom_opsets = {}\n\n            _C._jit_pass_dce_allow_deleting_nodes_with_side_effects(graph)\n            node_attr_to_name = {}  # type: ignore[var-annotated]\n            if module_typenames_to_export_as_functions:\n                # NOTE: cannot call DCE after this pass. DCE will remove function definition nodes.\n                node_attr_to_name = _C._jit_pass_onnx_function_extraction(\n                    graph,\n                    module_typenames_to_export_as_functions,\n                    list(params_dict.keys()),\n                )\n\n            if keep_initializers_as_inputs is not True:\n                params_dict = _C._jit_pass_onnx_deduplicate_initializers(  # type: ignore[assignment]\n                    graph,\n                    params_dict,\n                    getattr(model, \"training\", False),  # type: ignore[arg-type]\n                )\n            _C._jit_pass_onnx_assign_scoped_names_for_node_and_value(graph)\n            if export_params:\n                (\n                    proto,\n                    export_map,\n                    val_use_external_data_format,\n                    node_names,\n                ) = graph._export_onnx(  # type: ignore[attr-defined]\n                    params_dict,\n                    opset_version,\n                    dynamic_axes,\n                    defer_weight_export,\n                    operator_export_type,\n                    not verbose,\n                    val_keep_init_as_ip,\n                    custom_opsets,\n                    val_add_node_names,\n                    model_file_location,\n                    node_attr_to_name,\n                )\n            else:\n                (\n                    proto,\n                    export_map,\n                    val_use_external_data_format,\n                    node_names,\n                ) = graph._export_onnx(  # type: ignore[attr-defined]\n                    {},\n                    opset_version,\n                    dynamic_axes,\n                    False,\n                    operator_export_type,\n                    not verbose,\n                    val_keep_init_as_ip,\n                    custom_opsets,\n                    val_add_node_names,\n                    model_file_location,\n                    node_attr_to_name,\n                )\n            # insert function_proto into model_proto.\n            proto = onnx_proto_utils._add_onnxscript_fn(\n                
proto,\n                custom_opsets,\n            )\n            if verbose:\n                torch.onnx.log(\"Exported graph: \", graph)\n            onnx_proto_utils._export_file(proto, f, export_type, export_map)\n            # The ONNX checker only works for ONNX graph. So if the operator_export_type is not ONNX,\n            # we can skip this check.\n            # If large model format export is enabled, proto will only contain data location instead of\n            # raw data and _check_onnx_proto() will fail because it can only handle the raw ONNX proto\n            # string in memory.\n            if (operator_export_type is _C_onnx.OperatorExportTypes.ONNX) and (\n                not val_use_external_data_format\n            ):\n                try:\n                    _C._check_onnx_proto(proto)\n                except RuntimeError as e:\n                    raise errors.CheckerError(e) from e\n    finally:\n        assert GLOBALS.in_onnx_export\n        GLOBALS.in_onnx_export = False\n        GLOBALS.autograd_inlining = _autograd_inlining_previous\n        _reset_trace_module_map()\n\n    return torch_out, params_dict\n\n\n@_beartype.beartype\ndef _apply_friendly_debug_names(graph, params):\n    for n in graph.nodes():\n        for v in n.inputs():\n            old_name = v.debugName()\n            if old_name != str(v.unique()):\n                continue\n            new_name = f\"{n.kind()}_{v.unique()}\"\n            v.setDebugName(new_name)\n            if old_name in params:\n                params[new_name] = params.pop(old_name)\n\n\n@_beartype.beartype\ndef _set_input_and_output_names(graph, input_names, output_names):\n    @_beartype.beartype\n    def set_names(node_list, name_list, descriptor):\n        if name_list is None:\n            return\n        if len(name_list) > len(node_list):\n            raise RuntimeError(\n                \"number of %s names provided (%d) exceeded number of %ss (%d)\"\n                % (descriptor, len(name_list), descriptor, len(node_list))\n            )\n\n        # Mark if the output node DebugName is set before.\n        output_node_set = set()\n        for i, (name, node) in enumerate(zip(name_list, node_list)):\n            # Duplicated output node, insert onnx::Identity to avoid setting the same DebugName after setDebugName().\n            if descriptor == \"output\":\n                if node in output_node_set:\n                    identity_node = graph.create(\"onnx::Identity\")\n                    identity_node.insertAfter(node.node())\n                    identity_node.addInput(node)\n                    identity_node.output().setType(node.type())\n                    graph.return_node().replaceInput(i, identity_node.output())\n                    node = identity_node.output()\n                output_node_set.add(node)\n\n            if node.debugName() != name:\n                node.setDebugName(name)\n\n    set_names(list(graph.inputs()), input_names, \"input\")\n    set_names(list(graph.outputs()), output_names, \"output\")\n\n\n@_beartype.beartype\ndef _run_symbolic_method(g, op_name, symbolic_fn, args):\n    r\"\"\"\n    This trampoline function gets invoked for every symbolic method\n    call from C++.\n    \"\"\"\n    try:\n        graph_context = jit_utils.GraphContext(\n            graph=g,\n            block=g.block(),\n            opset=GLOBALS.export_onnx_opset_version,\n            original_node=None,  # type: ignore[arg-type]\n            params_dict=_params_dict,\n            env={},\n            
values_in_env=set(),\n            new_nodes=[],\n        )\n        return symbolic_fn(graph_context, *args)\n    except TypeError as e:\n        # Handle the specific case where we didn't successfully dispatch\n        # to symbolic_fn.  Otherwise, the backtrace will have the clues\n        # you need.\n        e.args = (f\"{e.args[0]} (occurred when translating {op_name})\",)\n        raise\n\n\n@_beartype.beartype\ndef _add_block(node: _C.Node) -> _C.Block:\n    return node.addBlock()\n\n\n@_beartype.beartype\ndef _add_input_to_block(block: _C.Block):\n    return block.addInputToBlock()  # type: ignore[attr-defined]\n\n\n@_beartype.beartype\ndef _add_output_to_block(block: _C.Block, value: _C.Value) -> int:\n    return block.registerOutput(value)\n\n\n@_beartype.beartype\ndef _should_aten_fallback(\n    name: str, opset_version: int, operator_export_type: _C_onnx.OperatorExportTypes\n):\n    # For all builds, if domain==\"aten\" and operator_export_type==ONNX_ATEN,\n    #   an aten::ATen operator is created regardless of symbolics existence\n\n    is_exportable_aten_op = registration.registry.is_registered_op(name, opset_version)\n    is_onnx_aten_export = operator_export_type == _C_onnx.OperatorExportTypes.ONNX_ATEN\n    is_aten_fallback_export = (\n        operator_export_type == _C_onnx.OperatorExportTypes.ONNX_ATEN_FALLBACK\n    )\n    is_caffe2_build = _C_onnx._CAFFE2_ATEN_FALLBACK\n\n    if not name.startswith(\"aten::\"):\n        return False\n\n    if is_caffe2_build:\n        if (\n            is_onnx_aten_export or is_aten_fallback_export\n        ) and not is_exportable_aten_op:\n            return True\n    else:\n        if is_onnx_aten_export or (\n            is_aten_fallback_export and not is_exportable_aten_op\n        ):\n            return True\n\n    return False\n\n\n@_beartype.beartype\ndef _need_symbolic_context(symbolic_fn: Callable) -> bool:\n    \"\"\"Checks if the first argument to symbolic_fn is annotated as type `torch.onnx.SymbolicContext`.\"\"\"\n    params = tuple(inspect.signature(symbolic_fn).parameters.values())\n    # When the annotation is postpone-evaluated, the annotation is a string\n    # and not a type. We need to use get_type_hints to get the real type.\n    if not params:\n        return False\n    first_param_name = params[0].name\n    type_hints = typing.get_type_hints(symbolic_fn)\n    if first_param_name not in type_hints:\n        return False\n    param_type = type_hints[first_param_name]\n    return issubclass(param_type, _exporter_states.SymbolicContext)\n\n\n@_beartype.beartype\ndef _symbolic_context_handler(symbolic_fn: Callable) -> Callable:\n    \"\"\"Decorator that provides the symbolic context to the symbolic function if needed.\"\"\"\n    if _need_symbolic_context(symbolic_fn):\n        # TODO(justinchuby): Update the module name of GraphContext when it is public\n        warnings.warn(\n            \"The first argument to symbolic functions is deprecated in 1.13 and will be \"\n            \"removed in the future. 
Please treat the first argument (g) as a GraphContext \"\n            \"and use context information from the object instead.\",\n            category=FutureWarning,\n        )\n\n        def wrapper(graph_context: jit_utils.GraphContext, *args, **kwargs):\n            symbolic_context = _exporter_states.SymbolicContext(\n                params_dict=graph_context.params_dict,\n                env=graph_context.env,\n                cur_node=graph_context.original_node,\n                onnx_block=graph_context.block,\n            )\n            return symbolic_fn(symbolic_context, graph_context, *args, **kwargs)\n\n        return wrapper\n    return symbolic_fn\n\n\n@_beartype.beartype\ndef _get_aten_op_overload_name(n: _C.Node) -> str:\n    # Returns the \`overload_name\` attribute of ATen ops on non-Caffe2 builds\n    schema = n.schema()\n    if not schema.startswith(\"aten::\") or symbolic_helper.is_caffe2_aten_fallback():\n        return \"\"\n    return _C.parse_schema(schema).overload_name\n\n\n@_beartype.beartype\ndef _run_symbolic_function(\n    graph: _C.Graph,\n    block: _C.Block,\n    node: _C.Node,\n    inputs: Any,\n    env: Dict[_C.Value, _C.Value],\n    values_in_env: Set[_C.Value],\n    new_nodes: List[_C.Node],\n    operator_export_type=_C_onnx.OperatorExportTypes.ONNX,\n) -> Optional[Union[_C.Value, Sequence[Optional[_C.Value]]]]:\n    \"\"\"Runs a symbolic function.\n\n    The function is used in C++ to export the node to ONNX.\n\n    Returns:\n        A single or a tuple of Values.\n        None when the node gets cloned as is into the new graph.\n    \"\"\"\n\n    opset_version = GLOBALS.export_onnx_opset_version\n\n    # See Note [Export inplace]\n    node_kind = node.kind()\n    if node_kind.endswith(\"_\"):\n        # Treat relu_ -> relu; add_ -> add etc.\n        ns_op_name = node_kind[:-1]\n    else:\n        ns_op_name = node_kind\n\n    namespace, op_name = jit_utils.parse_node_kind(ns_op_name)\n\n    graph_context = jit_utils.GraphContext(\n        graph=graph,\n        block=block,\n        opset=opset_version,\n        original_node=node,\n        params_dict=_params_dict,\n        env=env,\n        values_in_env=values_in_env,\n        new_nodes=new_nodes,\n    )\n\n    # Direct ATen export requested\n    if _should_aten_fallback(ns_op_name, opset_version, operator_export_type):\n        attrs = {\n            k + \"_\" + node.kindOf(k)[0]: symbolic_helper._node_get(node, k)\n            for k in node.attributeNames()\n        }\n        outputs = node.outputsSize()\n        attrs[\"outputs\"] = outputs\n        return graph_context.aten_op(\n            op_name,\n            *inputs,\n            overload_name=_get_aten_op_overload_name(node),\n            **attrs,\n        )\n\n    try:\n        # Caffe2-specific: Quantized op symbolics are registered for opset 9 only.\n        if symbolic_helper.is_caffe2_aten_fallback() and opset_version == 9:\n            symbolic_caffe2.register_quantized_ops(\"caffe2\", opset_version)\n\n        if namespace == \"quantized\" and symbolic_helper.is_caffe2_aten_fallback():\n            domain = \"caffe2\"\n        else:\n            domain = namespace\n        symbolic_function_name = f\"{domain}::{op_name}\"\n\n        symbolic_function_group = registration.registry.get_function_group(\n            symbolic_function_name\n        )\n        if symbolic_function_group is not None:\n            symbolic_fn = symbolic_function_group.get(opset_version)\n            if symbolic_fn is not None:\n                # TODO Wrap 
almost identical attrs assignment or comment the difference.\n                attrs = {\n                    k: symbolic_helper._node_get(node, k) for k in node.attributeNames()\n                }\n                return symbolic_fn(graph_context, *inputs, **attrs)\n\n        attrs = {\n            k + \"_\" + node.kindOf(k)[0]: symbolic_helper._node_get(node, k)\n            for k in node.attributeNames()\n        }\n        if namespace == \"onnx\":\n            # Clone node to trigger ONNX shape inference\n            return graph_context.op(\n                op_name, *inputs, **attrs, outputs=node.outputsSize()\n            )  # type: ignore[attr-defined]\n\n        raise errors.UnsupportedOperatorError(\n            symbolic_function_name,\n            opset_version,\n            symbolic_function_group.get_min_supported()\n            if symbolic_function_group\n            else None,\n        )\n\n    except RuntimeError:\n        if operator_export_type == _C_onnx.OperatorExportTypes.ONNX_FALLTHROUGH:\n            return None\n        elif (\n            operator_export_type == _C_onnx.OperatorExportTypes.ONNX_ATEN_FALLBACK\n            and not symbolic_helper.is_caffe2_aten_fallback()\n        ):\n            # Emit ATen op for non-Caffe2 builds when `operator_export_type==ONNX_ATEN_FALLBACK`\n            attrs = {\n                k + \"_\" + node.kindOf(k)[0]: symbolic_helper._node_get(node, k)\n                for k in node.attributeNames()\n            }\n            return graph_context.aten_op(\n                op_name,\n                *inputs,\n                overload_name=_get_aten_op_overload_name(node),\n                **attrs,\n            )\n        raise\n    except TypeError as e:\n        # Handle the specific case where we didn't successfully dispatch.\n        # Otherwise, the backtrace will have the clues you need.\n        e.args = (f\"{e.args[0]} \\n(Occurred when translating {op_name}).\",)\n        raise\n\n\n@_beartype.beartype\ndef _verify_custom_op_name(symbolic_name: str):\n    if not re.match(r\"^[a-zA-Z0-9-_]+::[a-zA-Z-_]+[a-zA-Z0-9-_]*$\", symbolic_name):\n        raise errors.OnnxExporterError(\n            f\"Failed to register operator {symbolic_name}. \"\n            \"The symbolic name must match the format domain::name, \"\n            \"and should start with a letter and contain only \"\n            \"alphanumerical characters\"\n        )\n\n    ns, _ = jit_utils.parse_node_kind(symbolic_name)\n    if ns == \"onnx\":\n        raise ValueError(\n            f\"Failed to register operator {symbolic_name}. 
{ns} domain cannot be modified.\"\n        )\n\n\n@_beartype.beartype\ndef register_custom_op_symbolic(\n    symbolic_name: str,\n    symbolic_fn: Callable,\n    opset_version: int,\n):\n    \"\"\"Registers a symbolic function for a custom operator.\n\n    When the user registers a symbolic function for custom/contrib ops,\n    it is highly recommended to add shape inference for that operator via the setType API;\n    otherwise the exported graph may have incorrect shape inference in some extreme cases.\n    An example of setType is \`test_aten_embedding_2\` in \`test_operators.py\`.\n\n    See \"Custom Operators\" in the module documentation for an example usage.\n\n    Args:\n        symbolic_name (str): The name of the custom operator in \"<domain>::<op>\"\n            format.\n        symbolic_fn (Callable): A function that takes in the ONNX graph and\n            the input arguments to the current operator, and returns new\n            operator nodes to add to the graph.\n        opset_version (int): The ONNX opset version in which to register.\n    \"\"\"\n    if symbolic_name.startswith(\"::\"):\n        symbolic_name = f\"aten{symbolic_name}\"\n\n    _verify_custom_op_name(symbolic_name)\n\n    registration.custom_onnx_symbolic(\n        symbolic_name,\n        opset_version,\n        decorate=[\n            _symbolic_context_handler,\n        ],\n    )(symbolic_fn)\n\n\n@_beartype.beartype\ndef unregister_custom_op_symbolic(symbolic_name: str, opset_version: int):\n    \"\"\"Unregisters \`\`symbolic_name\`\`.\n\n    See \"Custom Operators\" in the module documentation for an example usage.\n\n    Args:\n        symbolic_name (str): The name of the custom operator in \"<domain>::<op>\"\n            format.\n        opset_version (int): The ONNX opset version in which to unregister.\n    \"\"\"\n    if symbolic_name.startswith(\"::\"):\n        symbolic_name = f\"aten{symbolic_name}\"\n\n    _verify_custom_op_name(symbolic_name)\n\n    registration.registry.unregister(symbolic_name, opset_version)\n\n\n@_beartype.beartype\ndef _validate_dynamic_axes(dynamic_axes, model, input_names, output_names):\n    \"\"\"Ensures the dynamic axes argument follows the expected format.\"\"\"\n    if len(dynamic_axes) == 0:\n        return\n\n    if hasattr(model, \"graph\"):\n        # Extract the set of valid input/output names to be used for dynamic_axes\n        if (input_names is None) or len(input_names) == 0:\n            input_names = [x.debugName() for x in model.graph.inputs()]\n        if (output_names is None) or len(output_names) == 0:\n            output_names = [y.debugName() for y in model.graph.outputs()]\n\n    valid_names = set((input_names or []) + (output_names or []))\n\n    # If dynamic axes are provided as a list rather than a dictionary, they should\n    # first be converted to a dictionary in the expected format. 
If desired axis names\n    # are not provided for dynamic axes, automatic names will be generated for\n    # the dynamic axes of the specified input/output.\n    for key, value in dynamic_axes.items():\n        if key not in valid_names:\n            warnings.warn(\n                f\"Provided key {key} for dynamic axes is not a valid input/output name\"\n            )\n        if isinstance(value, list):\n            warnings.warn(\n                \"No names were found for specified dynamic axes of provided input. \"\n                f\"Automatically generated names will be applied to each dynamic axis of input {key}\"\n            )\n\n            value_dict = {}\n            for i, x in enumerate(value):\n                if not isinstance(x, int):\n                    raise ValueError(\n                        \"The type of axis index is expected to be an integer\"\n                    )\n                if x in value_dict:\n                    warnings.warn(\n                        f\"Duplicate dynamic axis index {x} was provided for input {key}.\"\n                    )\n                else:\n                    value_dict[x] = str(key) + \"_dynamic_axes_\" + str(i + 1)\n            dynamic_axes[key] = value_dict\n\n\ndef model_signature(model: Union[torch.nn.Module, Callable]) -> inspect.Signature:\n    return inspect.signature(\n        model.forward if isinstance(model, torch.nn.Module) else model\n    )\n"
  },
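  {
    "path": "examples/custom_op_symbolic_sketch.py",
    "content": "# Hypothetical usage sketch, not part of the original project: it shows how a\n# symbolic function could be registered through the register_custom_op_symbolic\n# API vendored above. The custom op torch.ops.mynamespace.myop, the \"mynamespace\"\n# domain, and the shape-preserving setType call are all assumptions.\nimport torch\nimport torch.onnx\n\n\ndef myop_symbolic(g, input):\n    # Emit a single node in the custom \"mynamespace\" ONNX domain. The docstring\n    # above recommends setType-based shape inference; propagating the input type\n    # is an assumption that only fits shape-preserving ops.\n    return g.op(\"mynamespace::MyOp\", input).setType(input.type())\n\n\n# Register for opset 17, the highest opset the TorchScript exporter supports.\ntorch.onnx.register_custom_op_symbolic(\"mynamespace::myop\", myop_symbolic, 17)\n"
  },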
  {
    "path": "module/sd_tensorrt.py",
    "content": "from .tensorrt_wrapper import CallableTensorRTEngineWrapper\n\n\nclass CallableTensorRTEngineWrapperDynamicShapeVAEDecode(CallableTensorRTEngineWrapper):\n    args_name = [\n        \"samples\",\n    ]\n\n    def gen_onnx_args(self, kwargs, module=None):\n        args_name = []\n        args = []\n        for arg_name in self.args_name:\n            args.append(kwargs.get(arg_name, None))\n            if args[-1] != None:\n                args_name.append(arg_name)\n        dynamic_axes = {\n            \"samples\": {2: \"H\", 3: \"W\"},\n        }\n        for k in list(dynamic_axes.keys()):\n            if not k in args_name:\n                dynamic_axes.pop(k)\n        return args, args_name, dynamic_axes\n\n    def gen_tensorrt_args(self, kwargs):\n        input_shape_info = {}\n        feed_dict = {}\n        for arg_name in self.args_name:\n            arg = kwargs.get(arg_name, None)\n            if arg != None:\n                feed_dict[arg_name] = arg\n                input_shape_info[arg_name] = tuple(arg.shape)\n\n        return feed_dict, input_shape_info\n\n    def gen_tensorrt_args_profile(self, input_shape_info):\n        min_input_profile_info = {\n            \"samples\": {2: 2, 3: 2},\n        }\n        input_profile_info = {}\n        for arg_name, shape_info in input_shape_info.items():\n            min_shape_config = min_input_profile_info.get(arg_name, None)\n            min_shape_info = list(shape_info)\n            if min_shape_config != None:\n                for k, v in min_shape_config.items():\n                    min_shape_info[k] = v\n            input_profile_info[arg_name] = [\n                tuple(min_shape_info),\n                shape_info,\n                shape_info,\n            ]\n\n        return input_profile_info\n"
  },
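  {
    "path": "examples/vae_decode_profile_sketch.py",
    "content": "# Hypothetical sketch, not part of the original project: it illustrates the\n# profile logic of CallableTensorRTEngineWrapperDynamicShapeVAEDecode above,\n# which clamps the H and W axes of \"samples\" down to a minimum of 2 and reuses\n# the observed shape as both the opt and the max shape.\nfrom module.sd_tensorrt import CallableTensorRTEngineWrapperDynamicShapeVAEDecode\n\n# The base class takes (tensorrt_context, identification); placeholders are\n# enough here because gen_tensorrt_args_profile does not touch either of them.\nwrapper = CallableTensorRTEngineWrapperDynamicShapeVAEDecode(None, \"sketch\")\nprofile = wrapper.gen_tensorrt_args_profile({\"samples\": (1, 4, 64, 64)})\n# Expected: {\"samples\": [(1, 4, 2, 2), (1, 4, 64, 64), (1, 4, 64, 64)]}\nprint(profile)\n"
  },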
  {
    "path": "module/sfast_pipeline_compiler.py",
    "content": "import functools\nimport logging\nfrom dataclasses import dataclass\n\nimport torch\nfrom sfast.compilers.diffusion_pipeline_compiler import (\n    _enable_xformers,\n    _modify_model,\n)\nfrom sfast.cuda.graphs import make_dynamic_graphed_callable\nfrom sfast.jit import utils as jit_utils\nfrom sfast.jit.trace_helper import trace_with_kwargs\n\nfrom .comfy_trace.model_base import BaseModelApplyModelModuleFactory\n\nlogger = logging.getLogger()\n\n\n@dataclass\nclass TracedModuleCacheItem:\n    module: object\n    patch_id: int\n    device: str\n\n\nclass LazyTraceModule:\n    traced_modules = {}\n\n    def __init__(self, config=None, patch_id=None, **kwargs_) -> None:\n        self.config = config\n        self.patch_id = patch_id\n        self.kwargs_ = kwargs_\n        self.modify_model = functools.partial(\n            _modify_model,\n            enable_cnn_optimization=config.enable_cnn_optimization,\n            prefer_lowp_gemm=config.prefer_lowp_gemm,\n            enable_triton=config.enable_triton,\n            enable_triton_reshape=config.enable_triton,\n            memory_format=config.memory_format,\n        )\n        self.cuda_graph_modules = {}\n\n    def ts_compiler(\n        self,\n        m,\n    ):\n        with torch.jit.optimized_execution(True):\n            if self.config.enable_jit_freeze:\n                # raw freeze causes Tensor reference leak\n                # because the constant Tensors in the GraphFunction of\n                # the compilation unit are never freed.\n                m.eval()\n                m = jit_utils.better_freeze(m)\n            self.modify_model(m)\n\n        if self.config.enable_cuda_graph:\n            m = make_dynamic_graphed_callable(m)\n        return m\n\n    def __call__(self, model_function, /, **kwargs):\n        module_factory = BaseModelApplyModelModuleFactory(model_function, kwargs)\n        kwargs = module_factory.get_converted_kwargs()\n        key = module_factory.gen_cache_key()\n\n        traced_module = self.cuda_graph_modules.get(key)\n        if traced_module is None and not (\n            self.config.enable_cuda_graph or self.config.enable_jit_freeze\n        ):\n            traced_module_cache = self.traced_modules.get(key)\n            if not traced_module_cache is None:\n                if (\n                    traced_module_cache.patch_id != self.patch_id\n                    or traced_module_cache.device == \"meta\"\n                ):\n                    with module_factory.converted_module_context() as (\n                        m_model,\n                        m_kwargs,\n                    ):\n                        next(\n                            next(traced_module_cache.module.children()).children()\n                        ).load_state_dict(\n                            m_model.state_dict(), strict=False, assign=True\n                        )\n\n                    traced_module_cache.device = None\n                    traced_module_cache.patch_id = self.patch_id\n                traced_module = traced_module_cache.module\n\n        if traced_module is None:\n            with module_factory.converted_module_context() as (m_model, m_kwargs):\n                logger.info(\n                    f'Tracing {getattr(m_model, \"__name__\", m_model.__class__.__name__)}'\n                )\n                traced_m, call_helper = trace_with_kwargs(\n                    m_model, None, m_kwargs, **self.kwargs_\n                )\n\n            traced_m = self.ts_compiler(traced_m)\n       
     traced_module = call_helper(traced_m)\n            if self.config.enable_cuda_graph or self.config.enable_jit_freeze:\n                self.cuda_graph_modules[key] = traced_module\n            else:\n                self.traced_modules[key] = TracedModuleCacheItem(\n                    module=traced_module, patch_id=self.patch_id, device=None\n                )\n\n        return traced_module(**kwargs)\n\n    def to_empty(self):\n        for v in self.traced_modules.values():\n            v.module.to_empty(device=\"meta\")\n            v.device = \"meta\"\n\n\ndef build_lazy_trace_module(config, device, patch_id):\n    config.enable_cuda_graph = config.enable_cuda_graph and device.type == \"cuda\"\n\n    if config.enable_xformers:\n        _enable_xformers(None)\n\n    return LazyTraceModule(\n        config=config,\n        patch_id=patch_id,\n        check_trace=True,\n        strict=True,\n    )\n"
  },
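  {
    "path": "examples/lazy_trace_config_sketch.py",
    "content": "# Hypothetical sketch, not part of the original project: wiring a config into\n# build_lazy_trace_module from module/sfast_pipeline_compiler.py. A plain\n# namespace stands in for stable-fast's CompilationConfig here and only sets\n# the attributes the wrapper actually reads; patch_id is an arbitrary value.\nimport types\n\nimport torch\n\nfrom module.sfast_pipeline_compiler import build_lazy_trace_module\n\nconfig = types.SimpleNamespace(\n    enable_cnn_optimization=True,\n    prefer_lowp_gemm=True,\n    enable_triton=False,\n    memory_format=torch.channels_last,\n    enable_jit_freeze=False,\n    enable_cuda_graph=False,\n    enable_xformers=False,\n)\n\nlazy_trace = build_lazy_trace_module(config, torch.device(\"cuda\"), patch_id=0)\n# lazy_trace(model_function, **kwargs) traces on the first call and then reuses\n# the cached TorchScript module for matching cache keys.\n"
  },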
  {
    "path": "module/tensorrt_utilities.py",
    "content": "#\n# Copyright 2022 The HuggingFace Inc. team.\n# SPDX-FileCopyrightText: Copyright (c) 1993-2023 NVIDIA CORPORATION & AFFILIATES. All rights reserved.\n# SPDX-License-Identifier: Apache-2.0\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n#\nimport copy\nfrom collections import OrderedDict\nfrom logging import warning\n\nimport numpy as np\nimport tensorrt as trt\nimport torch\nimport zstandard\nfrom polygraphy import util\nfrom polygraphy.backend.trt import (\n    ModifyNetworkOutputs,\n    Profile,\n    bytes_from_engine,\n    engine_from_bytes,\n    engine_from_network,\n    network_from_onnx_bytes,\n    network_from_onnx_path,\n)\nfrom polygraphy.logger import G_LOGGER\nfrom torch.cuda import nvtx\nfrom tqdm import tqdm\n\nTRT_LOGGER = trt.Logger(trt.Logger.VERBOSE)\nG_LOGGER.module_severity = G_LOGGER.VERBOSE\n\n# Map of numpy dtype -> torch dtype\nnumpy_to_torch_dtype_dict = {\n    np.uint8: torch.uint8,\n    np.int8: torch.int8,\n    np.int16: torch.int16,\n    np.int32: torch.int32,\n    np.int64: torch.int64,\n    np.float16: torch.float16,\n    np.float32: torch.float32,\n    np.float64: torch.float64,\n    np.complex64: torch.complex64,\n    np.complex128: torch.complex128,\n}\nif np.version.full_version >= \"1.24.0\":\n    numpy_to_torch_dtype_dict[np.bool_] = torch.bool\nelse:\n    numpy_to_torch_dtype_dict[np.bool] = torch.bool\n\n# Map of torch dtype -> numpy dtype\ntorch_to_numpy_dtype_dict = {\n    value: key for (key, value) in numpy_to_torch_dtype_dict.items()\n}\n\n\nclass TQDMProgressMonitor(trt.IProgressMonitor):\n    def __init__(self):\n        trt.IProgressMonitor.__init__(self)\n        self._active_phases = {}\n        self._step_result = True\n        self.max_indent = 5\n\n    def phase_start(self, phase_name, parent_phase, num_steps):\n        leave = False\n        try:\n            if parent_phase is not None:\n                nbIndents = (\n                    self._active_phases.get(parent_phase, {}).get(\n                        \"nbIndents\", self.max_indent\n                    )\n                    + 1\n                )\n                if nbIndents >= self.max_indent:\n                    return\n            else:\n                nbIndents = 0\n                leave = True\n            self._active_phases[phase_name] = {\n                \"tq\": tqdm(\n                    total=num_steps, desc=phase_name, leave=leave, position=nbIndents\n                ),\n                \"nbIndents\": nbIndents,\n                \"parent_phase\": parent_phase,\n            }\n        except KeyboardInterrupt:\n            # The phase_start callback cannot directly cancel the build, so request the cancellation from within step_complete.\n            _step_result = False\n\n    def phase_finish(self, phase_name):\n        try:\n            if phase_name in self._active_phases.keys():\n                self._active_phases[phase_name][\"tq\"].update(\n                    self._active_phases[phase_name][\"tq\"].total\n                    - 
self._active_phases[phase_name][\"tq\"].n\n                )\n\n                parent_phase = self._active_phases[phase_name].get(\"parent_phase\", None)\n                while parent_phase is not None:\n                    self._active_phases[parent_phase][\"tq\"].refresh()\n                    parent_phase = self._active_phases[parent_phase].get(\n                        \"parent_phase\", None\n                    )\n                if (\n                    self._active_phases[phase_name][\"parent_phase\"]\n                    in self._active_phases.keys()\n                ):\n                    self._active_phases[\n                        self._active_phases[phase_name][\"parent_phase\"]\n                    ][\"tq\"].refresh()\n                del self._active_phases[phase_name]\n            pass\n        except KeyboardInterrupt:\n            _step_result = False\n\n    def step_complete(self, phase_name, step):\n        try:\n            if phase_name in self._active_phases.keys():\n                self._active_phases[phase_name][\"tq\"].update(\n                    step - self._active_phases[phase_name][\"tq\"].n\n                )\n            return self._step_result\n        except KeyboardInterrupt:\n            # There is no need to propagate this exception to TensorRT. We can simply cancel the build.\n            return False\n\n\nclass Engine:\n    def __init__(self, engine_path, enable_cuda_graph=False):\n        self.engine_path = engine_path\n        self.engine = None\n        self.context = None\n        self.buffers = OrderedDict()\n        self.tensors = OrderedDict()\n        self.shared_device_memory = None\n\n        self.enable_cuda_graph = enable_cuda_graph\n        self.cuda_graph_instance = None  # cuda graph\n        self.inferred = False\n        self.cuda_graph_stream = None\n\n        self.refited_engine_byte = None\n\n        self.last_device_memory_size = 0\n\n    def __del__(self):\n        del self.engine\n        del self.context\n        del self.buffers\n        del self.tensors\n\n    def refit_simple(self, onnx_model):\n        print(f\"Refitting TensorRT engine with {onnx_model} weights\")\n\n        refitter = trt.Refitter(self.engine, TRT_LOGGER)\n        parser_refitter = trt.OnnxParserRefitter(refitter, TRT_LOGGER)\n        if type(onnx_model) is bytes:\n            result = parser_refitter.refit_from_bytes(onnx_model)\n        else:\n            result = parser_refitter.refit_from_file(onnx_model)\n\n        if not result or not refitter.refit_cuda_engine():\n            raise Exception(\"Failed to refit!\")\n\n    def refit_from_dict(\n        self,\n        refit_weights: dict[str, torch.Tensor],\n        constant_refit_weights: dict[str, torch.Tensor],\n    ):\n        # Initialize refitter\n        refitter = trt.Refitter(self.engine, TRT_LOGGER)\n\n        refitted_weights = set()\n        print(f\"[I] Total refittable weights {len(refitter.get_all_weights())}.\")\n\n        # iterate through all tensorrt refittable weights\n        for trt_weight_name in refitter.get_all_weights():\n            # get weight from state dict\n            if trt_weight_name in refit_weights:\n                refit_weight = refit_weights[trt_weight_name]\n            elif trt_weight_name in constant_refit_weights:\n                refit_weight = constant_refit_weights[trt_weight_name]\n                # print(refit_weight)\n            else:\n                continue\n\n            trt_datatype = refitter.get_weights_prototype(trt_weight_name).dtype\n    
        if trt_datatype == trt.DataType.FLOAT:\n                refit_weight = refit_weight.float()\n            elif trt_datatype == trt.DataType.HALF:\n                refit_weight = refit_weight.half()\n            else:\n                print(\"unhandled\", trt_datatype)\n                continue\n\n            # trt.Weight and trt.TensorLocation\n            trt_wt_tensor = trt.Weights(\n                trt_datatype,\n                refit_weight.data_ptr(),\n                torch.numel(refit_weight),\n            )\n            trt_wt_location = (\n                trt.TensorLocation.DEVICE\n                if refit_weight.is_cuda\n                else trt.TensorLocation.HOST\n            )\n\n            self.buffers[trt_weight_name] = refit_weight\n\n            # apply refit\n            assert refitter.set_named_weights(\n                trt_weight_name, trt_wt_tensor, trt_wt_location\n            )\n            refitted_weights.add(trt_weight_name)\n\n        # assert set(refitted_weights) == set(refit_weights.keys())\n        if not refitter.refit_cuda_engine():\n            raise Exception(\"Error: failed to refit new weights.\")\n\n        print(f\"[I] Total refitted weights {len(refitted_weights)}.\")\n\n    def build(\n        self,\n        onnx_model,\n        dtype,\n        input_profile=None,\n        enable_refit=False,\n        enable_weight_streaming=False,\n        enable_all_tactics=False,\n        timing_cache=None,\n        update_output_names=None,\n    ):\n        print(f\"Building TensorRT engine for : {self.engine_path}\")\n        config_kwargs = {}\n        if not enable_all_tactics:\n            config_kwargs[\"tactic_sources\"] = []\n\n        if type(onnx_model) is bytes:\n            network = network_from_onnx_bytes(\n                onnx_model,\n                flags=[\n                    trt.OnnxParserFlag.NATIVE_INSTANCENORM,\n                ],\n                strongly_typed=enable_weight_streaming,\n            )\n        else:\n            network = network_from_onnx_path(\n                onnx_model,\n                flags=[\n                    trt.OnnxParserFlag.NATIVE_INSTANCENORM,\n                ],\n                strongly_typed=enable_weight_streaming,\n            )\n        if update_output_names:\n            print(f\"Updating network outputs to {update_output_names}\")\n            network = ModifyNetworkOutputs(network, update_output_names)\n\n        input_names = set()\n        nd = network[1]\n        for i in range(nd.num_inputs):\n            input_names.add(nd.get_input(i).name)\n\n        p = [Profile()]\n        if input_profile:\n            p = [Profile() for i in range(len(input_profile))]\n            for _p, i_profile in zip(p, input_profile):\n                for name, dims in i_profile.items():\n                    if name not in input_names:\n                        continue\n                    assert len(dims) == 3\n                    _p.add(name, min=dims[0], opt=dims[1], max=dims[2])\n\n        builder = network[0]\n        config = builder.create_builder_config()\n        config.progress_monitor = TQDMProgressMonitor()\n\n        if not enable_weight_streaming:\n            if dtype == torch.float16:\n                config.set_flag(trt.BuilderFlag.FP16)\n            elif dtype == torch.bfloat16:\n                config.set_flag(trt.BuilderFlag.BF16)\n\n        if enable_refit:\n            config.set_flag(trt.BuilderFlag.STRIP_PLAN)\n            # Slower than REFIT_IDENTICAL\n            # 
config.set_flag(trt.BuilderFlag.REFIT)\n            config.set_flag(trt.BuilderFlag.REFIT_IDENTICAL)\n\n        if enable_weight_streaming:\n            config.set_flag(trt.BuilderFlag.WEIGHT_STREAMING)\n        # config.set_preview_feature(\n        #     trt.PreviewFeature.DISABLE_EXTERNAL_TACTIC_SOURCES_FOR_CORE_0805, False\n        # )\n        # config.set_tactic_sources(1 << int(trt.TacticSource.CUBLAS) | 1 << int(trt.TacticSource.CUBLAS_LT))\n\n        cache = None\n        try:\n            with util.LockFile(timing_cache):\n                timing_cache_data = util.load_file(\n                    timing_cache, description=\"tactic timing cache\"\n                )\n                cache = config.create_timing_cache(timing_cache_data)\n        except FileNotFoundError:\n            warning(\n                \"Timing cache file {} not found, falling back to empty timing cache.\".format(\n                    timing_cache\n                )\n            )\n        if cache is not None:\n            config.set_timing_cache(cache, ignore_mismatch=True)\n\n        profiles = copy.deepcopy(p)\n        for profile in profiles:\n            # Last profile is used for set_calibration_profile.\n            calib_profile = profile.fill_defaults(network[1]).to_trt(\n                builder, network[1]\n            )\n            config.add_optimization_profile(calib_profile)\n\n        try:\n            self.engine = engine_from_network(\n                network,\n                config,\n                save_timing_cache=timing_cache,\n            )\n        except Exception as e:\n            raise Exception(f\"Failed to build engine: {e}\") from e\n        self.update_binding_set()\n\n    def save_engine(self):\n        print(f\"Saving TensorRT engine: {self.engine_path}\")\n        with zstandard.open(self.engine_path, \"wb\") as zwfp:\n            zwfp.write(bytes_from_engine(self.engine))\n\n    def load(self):\n        if self.refited_engine_byte is not None:\n            print(\"Loading TensorRT engine from byte cache.\")\n            self.engine = engine_from_bytes(self.refited_engine_byte)\n            self.refited_engine_byte = None\n        else:\n            print(f\"Loading TensorRT engine: {self.engine_path}\")\n            with zstandard.open(self.engine_path, \"rb\") as zrfp:\n                self.engine = engine_from_bytes(zrfp.read())\n        self.update_binding_set()\n\n    def update_binding_set(self):\n        self.binding_set = set()\n        for idx in range(self.engine.num_io_tensors):\n            self.binding_set.add(self.engine[idx])\n\n    def offload(self, offload_context_only=False):\n        if not offload_context_only and self.refited_engine_byte is None:\n            serialization_config = self.engine.create_serialization_config()\n            serialization_config.flags &= ~(\n                1 << int(trt.SerializationFlag.EXCLUDE_WEIGHTS)\n            )\n            self.refited_engine_byte = self.engine.serialize_with_config(\n                serialization_config\n            )\n            self.buffers.clear()\n\n        del self.context\n        self.context = None\n\n        if not offload_context_only:\n            del self.engine\n            self.engine = None\n\n        self.tensors = OrderedDict()\n        self.shared_device_memory = None\n\n        self.cuda_graph_instance = None\n        self.inferred = False\n        self.cuda_graph_stream = None\n\n    def is_weight_streaming_engine(self):\n        return self.engine.streamable_weights_size > 0\n\n    
def activate(\n        self, reuse_device_memory=None, memory_limit_size=1000 * 1000 * 1000 * 3\n    ):\n        if self.context is None:\n            if self.is_weight_streaming_engine():\n\n                def update_budget_size():\n                    budget_size = memory_limit_size - self.engine.device_memory_size_v2\n                    if budget_size < 0:\n                        budget_size = 0\n                    self.engine.weight_streaming_budget_v2 = min(\n                        budget_size, self.engine.streamable_weights_size\n                    )\n\n                # When weight streaming is enabled, device_memory_size_v2 changes\n                # after the budget is set, so update the budget twice to converge.\n                update_budget_size()\n                update_budget_size()\n\n            if reuse_device_memory:\n                self.context = (\n                    self.engine.create_execution_context_without_device_memory()\n                )\n            #    self.context.device_memory = reuse_device_memory\n            else:\n                self.context = self.engine.create_execution_context()\n            assert self.context is not None\n\n    def get_device_memory_size(self):\n        if self.engine is not None:\n            if self.is_weight_streaming_engine():\n                self.last_device_memory_size = (\n                    self.engine.device_memory_size_v2\n                    + self.engine.weight_streaming_budget_v2\n                )\n            else:\n                self.last_device_memory_size = self.engine.device_memory_size_v2\n        return self.last_device_memory_size\n\n    def allocate_buffers(\n        self, shape_dict=None, device=\"cuda\", allocate_input_buffers=True\n    ):\n        nvtx.range_push(\"allocate_buffers\")\n        for idx in range(self.engine.num_io_tensors):\n            tensor_name = self.engine.get_tensor_name(idx)\n\n            if shape_dict and tensor_name in shape_dict:\n                shape = shape_dict[tensor_name].shape\n            else:\n                shape = self.context.get_tensor_shape(tensor_name)\n            shape = list(shape)\n            if (\n                tensor_name in self.tensors\n                and list(self.tensors[tensor_name].shape) == shape\n            ):\n                continue\n            dtype = trt.nptype(self.engine.get_tensor_dtype(tensor_name))\n            if self.engine.get_tensor_mode(tensor_name) == trt.TensorIOMode.INPUT:\n                self.context.set_input_shape(tensor_name, shape)\n                if not allocate_input_buffers or tensor_name not in shape_dict:\n                    continue\n            tensor = torch.empty(\n                tuple(shape), dtype=numpy_to_torch_dtype_dict[dtype], device=device\n            )\n            self.tensors[tensor_name] = tensor\n        if self.shared_device_memory is None:\n            self.shared_device_memory = torch.empty(\n                self.engine.device_memory_size_v2, dtype=torch.uint8, device=device\n            )\n            self.context.set_device_memory(\n                self.shared_device_memory.data_ptr(), self.engine.device_memory_size_v2\n            )\n        nvtx.range_pop()\n\n    def release_buffers(self):\n        self.tensors = OrderedDict()\n\n    def infer(\n        self,\n        feed_dict,\n        stream: torch.cuda.Stream,\n        stream_sync=False,\n        free_shared_device_memory=True,\n    ):\n        nvtx.range_push(\"set_tensors\")\n        for name, buf in feed_dict.items():\n            if name in self.tensors:\n                
self.tensors[name].copy_(buf)\n            elif name in self.binding_set:\n                dtype = trt.nptype(self.engine.get_tensor_dtype(name))\n                self.tensors[name] = buf.to(dtype=numpy_to_torch_dtype_dict[dtype])\n\n        for name, tensor in self.tensors.items():\n            self.context.set_tensor_address(name, tensor.data_ptr())\n        nvtx.range_pop()\n        nvtx.range_push(\"execute\")\n        if self.enable_cuda_graph and self.cuda_graph_instance is not None:\n            self.cuda_graph_instance.replay()\n        elif self.enable_cuda_graph and self.inferred:\n            # capture cuda graph\n            infer_graph = torch.cuda.CUDAGraph()\n            self.cuda_graph_stream = torch.cuda.Stream()\n\n            with torch.cuda.graph(infer_graph, stream=self.cuda_graph_stream):\n                noerror = self.context.execute_async_v3(\n                    self.cuda_graph_stream.cuda_stream\n                )\n\n            if not noerror:\n                raise ValueError(\"ERROR: inference failed.\")\n\n            self.cuda_graph_instance = infer_graph\n        else:\n            noerror = self.context.execute_async_v3(stream.cuda_stream)\n            if not noerror:\n                raise ValueError(\"ERROR: inference failed.\")\n            self.inferred = True\n        nvtx.range_pop()\n\n        if stream_sync:\n            stream.synchronize()\n\n        if not self.enable_cuda_graph and free_shared_device_memory:\n            del self.shared_device_memory\n            self.shared_device_memory = None\n\n        return self.tensors\n\n    def set_static_dict_input(self, feed_dict):\n        nvtx.range_push(\"set_tensors\")\n        for name, tensor in feed_dict.items():\n            dtype = trt.nptype(self.engine.get_tensor_dtype(name))\n            feed_dict[name] = tensor.to(dtype=numpy_to_torch_dtype_dict[dtype])\n            self.context.set_tensor_address(name, feed_dict[name].data_ptr())\n        nvtx.range_pop()\n\n    def __str__(self):\n        out = \"\"\n        for opt_profile in range(self.engine.num_optimization_profiles):\n            for binding_idx in range(self.engine.num_io_tensors):\n                name = self.engine.get_tensor_name(binding_idx)\n                shape = self.engine.get_tensor_profile_shape(opt_profile, name)\n                out += f\"\\t{name} = {shape}\\n\"\n        return out\n"
  },
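  {
    "path": "examples/engine_lifecycle_sketch.py",
    "content": "# Hypothetical sketch, not part of the original project: the build -> save ->\n# load -> activate -> allocate -> infer lifecycle of the Engine class from\n# module/tensorrt_utilities.py. Paths, tensor names, and shapes are placeholders.\nimport torch\n\nfrom module.tensorrt_utilities import Engine\n\nengine = Engine(\"model.trt.zst\")\nengine.build(\n    \"model.onnx\",  # an ONNX file path; raw ONNX bytes are also accepted\n    torch.float16,\n    input_profile=[{\"samples\": [(1, 4, 2, 2), (1, 4, 64, 64), (1, 4, 64, 64)]}],\n    timing_cache=\"timing.cache\",\n)\nengine.save_engine()  # writes a zstandard-compressed serialized engine\n\nengine.load()\nengine.activate()\nstream = torch.cuda.Stream()\nfeed_dict = {\n    \"samples\": torch.randn(1, 4, 64, 64, device=\"cuda\", dtype=torch.float16)\n}\nengine.allocate_buffers(shape_dict=feed_dict, device=\"cuda\")\noutputs = engine.infer(feed_dict, stream, stream_sync=True)\n"
  },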
  {
    "path": "module/tensorrt_wrapper.py",
    "content": "import gc\nimport hashlib\nimport json\nimport logging\nimport os\nimport tempfile\nimport time\nfrom dataclasses import dataclass, field\nfrom typing import Any, List\n\nimport comfy.cldm.cldm\nimport comfy.gligen\nimport comfy.ldm.modules.diffusionmodules.openaimodel\nimport comfy.model_management\nimport comfy.model_patcher\nimport numpy\nimport safetensors\nimport safetensors.torch\nimport tensorrt\nimport torch\nimport torch.version\nfrom torch.cuda import nvtx\n\nfrom .comfy_trace_utilities import hash_arg\nfrom .onnx_module_refit import (\n    make_constant_params_dict_by_onnx_model,\n    make_module_onnx_tensor_gen_map_by_params_dict,\n    make_params_dict_by_module,\n)\nfrom .tensorrt_utilities import Engine\n\n_logger = logging.getLogger(__name__)\n\n\n@dataclass\nclass TensorRTEngineConfig:\n    enable_cuda_graph: bool\n    keep_width: int = 768\n    keep_height: int = 768\n    keep_batch_size: int = 2\n    keep_embedding_block: int = 2\n    use_dedicated_engine: bool = False\n\n\nclass CallableTensorRTEngineWrapper:\n    def __init__(self, tensorrt_context, identification) -> None:\n        self.tensorrt_context: TensorRTEngineContext = tensorrt_context\n        self.identification = identification + self.__class__.__name__\n\n        self.engine: Engine = None\n        self.onnx_cache_dir = None\n        self.onnx_cache = None\n        self.onnx_refit_info = None\n\n        self.module_identification = None\n        self.input_shape_info = None\n        self.input_profile_info = None\n\n        self.engine_comfy_model_patcher_wrapper = None\n\n        self.engine_cache_map = {}\n\n    def gen_onnx_args(self, kwargs, module=None):\n        args = []\n        args_name = []\n        for arg_name, arg in kwargs.items():\n            args.append(arg)\n            if arg is not None:\n                args_name.append(arg_name)\n\n        return args, args_name, None\n\n    def gen_onnx_outputs(self, module):\n        return [\"output\"]\n\n    def gen_tensorrt_args(self, kwargs):\n        input_shape_info = {}\n        feed_dict = {}\n        for arg_name, arg in kwargs.items():\n            if arg is not None:\n                feed_dict[arg_name] = arg\n                input_shape_info[arg_name] = tuple(arg.shape)\n\n        return feed_dict, input_shape_info\n\n    def gen_tensorrt_args_profile(self, input_shape_info):\n        return {k: [v, v, v] for k, v in input_shape_info.items()}\n\n    def gen_tensorrt_outputs(self, output):\n        return output[\"output\"]\n\n    def is_profile_compatible(self, input_profile_info, input_shape_info):\n        if input_profile_info is None:\n            return False\n        if len(input_profile_info) != len(input_shape_info):\n            return False\n        for arg_name, shape in input_shape_info.items():\n            profile = input_profile_info.get(arg_name, None)\n            if profile is None:\n                return False\n            if len(profile[0]) != len(shape):\n                return False\n            for d, mind, maxd in zip(shape, profile[0], profile[2]):\n                if d < mind or d > maxd:\n                    return False\n        return True\n\n    def __call__(self, module: torch.nn.Module, /, **kwargs: Any) -> Any:\n        feed_dict, input_shape_info = self.gen_tensorrt_args(kwargs)\n\n        if self.engine is None or not self.is_profile_compatible(\n            self.input_profile_info, input_shape_info\n        ):\n            self.input_shape_info = input_shape_info\n            
input_profile_info = self.gen_tensorrt_args_profile(input_shape_info)\n\n            if self.tensorrt_context.identify_weight_hash:\n                if self.module_identification is None:\n                    self.module_identification = sha256sum_state_dict(\n                        module.state_dict()\n                    )\n\n            engine_cache_key = (\n                hash_arg(torch.version.__version__),\n                hash_arg(tensorrt.__version__),\n                hash_arg(self.tensorrt_context.unet_config),\n                hash_arg(self.identification),\n                hash_arg(input_profile_info),\n                hash_arg(self.tensorrt_context.enable_weight_streaming),\n                hash_arg(str(self.tensorrt_context.model_sampling_type)),\n                hash_arg(str(self.module_identification)),\n            )\n\n            if engine_cache_key in self.engine_cache_map:\n                (\n                    self.engine,\n                    self.engine_comfy_model_patcher_wrapper,\n                ) = self.engine_cache_map[engine_cache_key]\n                self.input_profile_info = input_profile_info\n            else:\n                engine = get_engine_with_cache(engine_cache_key)\n\n                args, args_name, dynamic_axes = self.gen_onnx_args(\n                    kwargs, module=module\n                )\n\n                onnx_cache_key = (\n                    hash_arg(torch.version.__version__),\n                    hash_arg(self.tensorrt_context.unet_config),\n                    hash_arg(self.identification),\n                    hash_arg((args_name, dynamic_axes)),\n                    hash_arg(str(self.tensorrt_context.model_sampling_type)),\n                    hash_arg(str(self.module_identification)),\n                )\n                self.onnx_refit_info = get_refit_info_cache(onnx_cache_key)\n\n                if (\n                    (engine is None)\n                    or (self.onnx_refit_info is None)\n                    or (not self.tensorrt_context.enable_fast_refit)\n                ) and self.onnx_cache is None:\n                    module.to(device=self.tensorrt_context.cuda_device)\n                    self.onnx_cache_dir = tempfile.TemporaryDirectory(\n                        suffix=\"onnx_cache_dir\"\n                    )\n                    self.onnx_cache = os.path.join(\n                        self.onnx_cache_dir.name, \"onnx_cache.onnx\"\n                    )\n                    try:\n                        use_patched_export = False\n                        # patched copy of torch.onnx.export whose only change is to also return the onnx params_dict\n                        if torch.version.__version__ == \"2.4.0\":\n                            from .patched_onnx_export.utils_2_4_0 import (\n                                export as patched_export,\n                            )\n\n                            use_patched_export = True\n                        if use_patched_export:\n                            torch_out, params_dict = patched_export(\n                                module,\n                                tuple(args),\n                                self.onnx_cache,\n                                export_params=True,\n                                verbose=False,\n                                do_constant_folding=False,\n                                input_names=args_name,\n                                output_names=self.gen_onnx_outputs(module),\n                                dynamic_axes=dynamic_axes,\n                                # dynamo=True\n                            )\n                            if self.tensorrt_context.enable_fast_refit:\n                                self.onnx_refit_info = gen_refit_info(onnx_cache_key)\n                                self.onnx_refit_info.tensor_gen_map = (\n                                    make_module_onnx_tensor_gen_map_by_params_dict(\n                                        module, params_dict\n                                    )\n                                )\n                                self.onnx_refit_info.constant_params_dict = (\n                                    make_constant_params_dict_by_onnx_model(\n                                        self.onnx_cache\n                                    )\n                                )\n                                self.onnx_refit_info.save()\n                            del params_dict\n                        else:\n                            torch.onnx.export(\n                                module,\n                                tuple(args),\n                                self.onnx_cache,\n                                export_params=True,\n                                verbose=False,\n                                do_constant_folding=False,\n                                input_names=args_name,\n                                output_names=self.gen_onnx_outputs(module),\n                                dynamic_axes=dynamic_axes,\n                            )\n                    except Exception as e:\n                        self.onnx_cache_dir.cleanup()\n                        self.onnx_cache_dir = None\n                        self.onnx_cache = None\n                        self.onnx_refit_info = None\n                        raise e\n\n                nvtx.range_push(\"offload origin model\")\n                module.to(device=\"cpu\")\n                gc.collect()\n                comfy.model_management.soft_empty_cache()\n                nvtx.range_pop()\n\n                additional_keep_models = get_additional_keep_models()\n\n                if engine is None:\n                    comfy.model_management.free_memory(\n                        6 * 1024 * 1024 * 1024,\n                        self.tensorrt_context.cuda_device,\n                    )\n                    comfy.model_management.soft_empty_cache()\n                    engine = gen_engine(\n                        engine_cache_key,\n                        self.onnx_cache,\n                        [input_profile_info],\n                        self.tensorrt_context.dtype,\n                        enable_weight_streaming=self.tensorrt_context.enable_weight_streaming,\n                    )\n                    engine.save_engine()\n\n                self.engine = engine\n                try:\n                    nvtx.range_push(\"load engine\")\n                    if self.engine.engine is None:\n                        self.engine.load()\n\n                    # reserve some memory for PyTorch\n                    memory_limit_size = int(\n                        comfy.model_management.get_total_memory()\n                        - (1024 * 1024 * 1024 * 2)\n                    )\n\n                    self.engine.activate(\n                        True,\n                        min(\n                            self.tensorrt_context.lowvram_model_memory,\n                            memory_limit_size,\n                        ),\n            
        )\n                    nvtx.range_push(\"refit engine\")\n                    if (\n                        self.tensorrt_context.enable_fast_refit\n                        and self.onnx_refit_info is not None\n                    ):\n                        _logger.info(\"using fast refit\")\n                        self.engine.refit_from_dict(\n                            make_params_dict_by_module(\n                                module, self.onnx_refit_info.tensor_gen_map\n                            ),\n                            self.onnx_refit_info.constant_params_dict,\n                        )\n                    else:\n                        self.engine.refit_simple(self.onnx_cache)\n                    nvtx.range_pop()\n                    self.engine_comfy_model_patcher_wrapper = (\n                        TensorRTEngineComfyModelPatcherWrapper(\n                            engine,\n                            load_device=self.tensorrt_context.cuda_device,\n                            offload_device=\"cpu\",\n                            size=self.engine.get_device_memory_size(),\n                        )\n                    )\n                    comfy.model_management.load_models_gpu(\n                        [\n                            *self.tensorrt_context.keep_models,\n                            self.engine_comfy_model_patcher_wrapper,\n                            *get_additional_keep_models(),\n                            *additional_keep_models,\n                        ],\n                        self.engine.get_device_memory_size(),\n                    )\n                    self.input_profile_info = input_profile_info\n                    self.engine_cache_map[engine_cache_key] = (\n                        self.engine,\n                        self.engine_comfy_model_patcher_wrapper,\n                    )\n                    nvtx.range_pop()\n                except Exception as e:\n                    self.engine = None\n                    gc.collect()\n                    raise e\n\n        if self.engine.context is None:\n            comfy.model_management.load_models_gpu(\n                [\n                    *self.tensorrt_context.keep_models,\n                    self.engine_comfy_model_patcher_wrapper,\n                    *get_additional_keep_models(),\n                ],\n                self.engine.get_device_memory_size(),\n            )\n\n        self.engine.allocate_buffers(\n            feed_dict,\n            device=self.tensorrt_context.cuda_device,\n            allocate_input_buffers=False,\n        )\n\n        output = self.engine.infer(\n            feed_dict,\n            self.tensorrt_context.cuda_stream,\n            self.tensorrt_context.infer_cuda_stream_sync,\n        )\n        output = self.gen_tensorrt_outputs(output)\n        self.engine.release_buffers()\n\n        return output\n\n\nclass TensorRTEngineComfyModelPatcherWrapper(comfy.model_patcher.ModelPatcher):\n    def patch_model_lowvram(self, device_to=None, *arg, **kwargs):\n        self.patch_model(device_to, patch_weights=False)\n\n    def patch_model(self, device_to=None, *arg, **kwargs):\n        if device_to is not None:\n            if self.model.engine is None:\n                self.model.load()\n            if self.model.context is None:\n                self.model.activate(True, self.model.last_device_memory_size)\n            self.current_device = device_to\n\n        return self.model\n\n    def unpatch_model(self, device_to=None, *arg, 
**kwargs):\n        if device_to is not None:\n            self.model.offload()\n            self.current_device = device_to\n\n\ndef get_additional_keep_models():\n    models = []\n    for model in comfy.model_management.current_loaded_models:\n        if isinstance(\n            model.real_model, (comfy.cldm.cldm.ControlNet, comfy.gligen.Gligen)\n        ):\n            models.append(model.model)\n    return models\n\n\n@dataclass\nclass TensorRTEngineContext:\n    # annotations are required: without them @dataclass would treat these\n    # as plain class attributes instead of instance fields\n    cuda_device: Any = None\n    shared_device_memory: Any = None\n    cuda_stream: Any = None\n    unet_config: dict = None\n    model_sampling_type: Any = None\n    model_type: str = \"\"\n    keep_models: List = field(default_factory=list)\n    dtype: object = torch.float16\n    enable_weight_streaming: bool = False\n    enable_fast_refit: bool = True\n    infer_cuda_stream_sync: bool = False\n    identify_weight_hash: bool = False\n    lowvram_model_memory: int = 0\n\n\nTIMING_CACHE_PATH = os.path.join(\n    os.path.dirname(os.path.dirname(__file__)),\n    \"tensorrt_engine_cache\",\n    \"timing_cache.cache\",\n)\nif not os.path.exists(TIMING_CACHE_PATH):\n    os.makedirs(os.path.dirname(TIMING_CACHE_PATH), exist_ok=True)\n    with open(TIMING_CACHE_PATH, \"wb\"):\n        pass\n\n\ndef get_key_hash(key):\n    return hashlib.sha256(str(key).encode()).hexdigest()\n\n\ndef get_cache_path(key, dir_name):\n    cache_dir = os.path.join(os.path.dirname(os.path.dirname(__file__)), dir_name)\n    if not os.path.exists(cache_dir):\n        os.makedirs(cache_dir, exist_ok=True)\n    basename = get_key_hash(key)\n    return os.path.join(cache_dir, basename)\n\n\ndef get_engine_path(key):\n    return get_cache_path(key, \"tensorrt_engine_cache\") + \".trt\"\n\n\ndef get_engine_with_cache(key):\n    engine_path = get_engine_path(key)\n    if os.path.exists(engine_path):\n        return Engine(engine_path)\n    return None\n\n\ndef gen_engine(key, onnx_model, input_profile, dtype, enable_weight_streaming=False):\n    engine = Engine(get_engine_path(key))\n    s = time.time()\n    engine.build(\n        onnx_model,\n        dtype=dtype,\n        enable_refit=True,\n        timing_cache=TIMING_CACHE_PATH,\n        input_profile=input_profile,\n        enable_weight_streaming=enable_weight_streaming,\n    )\n    e = time.time()\n    _logger.info(f\"Time taken to build: {e-s}s\")\n    return engine\n\n\ndef get_refit_info_cache(key):\n    refit_info_path = get_cache_path(key, \"refit_info\") + \".st\"\n    if os.path.exists(refit_info_path):\n        return TorchTensorRTRefitInfo(refit_info_path).load()\n    return None\n\n\ndef gen_refit_info(key):\n    refit_info_path = get_cache_path(key, \"refit_info\") + \".st\"\n    return TorchTensorRTRefitInfo(refit_info_path)\n\n\nclass TorchTensorRTRefitInfo:\n    def __init__(self, info_path) -> None:\n        self.info_path = info_path\n        self.tensor_gen_map = None\n        self.constant_params_dict = None\n\n    def save(self):\n        safetensors.torch.save_file(\n            self.constant_params_dict,\n            self.info_path,\n            metadata={\"tensor_gen_map\": json.dumps(self.tensor_gen_map)},\n        )\n\n    def load(self):\n        self.constant_params_dict = safetensors.torch.load_file(self.info_path)\n        with safetensors.safe_open(self.info_path, \"torch\") as st:\n            if st.metadata() is not None:\n                self.tensor_gen_map = json.loads(st.metadata()[\"tensor_gen_map\"])\n        return self\n\n\ndef sha256sum_state_dict(state_dict: dict[str, torch.Tensor]):\n  
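  # Fingerprint the weights: hash every tensor (cast to float16) so that\n    # different checkpoints get their own engine cache entry when\n    # identify_weight_hash (use_dedicated_engine) is enabled.\n  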
  hasher = hashlib.sha256()\n\n    for v in state_dict.values():\n        tensor_bytes = v.cpu().detach().numpy().astype(numpy.float16).tobytes()\n        hasher.update(tensor_bytes)\n\n    return hasher.hexdigest()\n"
  },
  {
    "path": "node.py",
    "content": "import torch\nfrom sfast.compilers.diffusion_pipeline_compiler import CompilationConfig\n\nfrom .module.sfast_pipeline_compiler import build_lazy_trace_module\n\n\ndef is_cuda_malloc_async():\n    return \"cudaMallocAsync\" in torch.cuda.get_allocator_backend()\n\n\ndef gen_stable_fast_config():\n    config = CompilationConfig.Default()\n    # xformers and triton are suggested for achieving best performance.\n    # It might be slow for triton to generate, compile and fine-tune kernels.\n    try:\n        import xformers\n\n        config.enable_xformers = True\n    except ImportError:\n        print(\"xformers not installed, skip\")\n    try:\n        import triton\n\n        config.enable_triton = True\n    except ImportError:\n        print(\"triton not installed, skip\")\n\n    if config.enable_triton and is_cuda_malloc_async():\n        print(\"disable stable fast triton because of cudaMallocAsync\")\n        config.enable_triton = False\n\n    # CUDA Graph is suggested for small batch sizes.\n    # After capturing, the model only accepts one fixed image size.\n    # If you want the model to be dynamic, don't enable it.\n    config.enable_cuda_graph = True\n    # config.enable_jit_freeze = False\n    return config\n\n\nclass StableFastPatch:\n    def __init__(self, model, config):\n        self.model = model\n        self.config = config\n        self.stable_fast_model = None\n\n    def __deepcopy__(self, memo=None):\n        return self\n\n    def __call__(self, model_function, params):\n        input_x = params.get(\"input\")\n        timestep_ = params.get(\"timestep\")\n        c = params.get(\"c\")\n\n        # disable with accelerate for now\n        if hasattr(model_function.__self__, \"hf_device_map\"):\n            return model_function(input_x, timestep_, **c)\n\n        if self.stable_fast_model is None:\n            self.stable_fast_model = build_lazy_trace_module(\n                self.config,\n                input_x.device,\n                id(self),\n            )\n\n        return self.stable_fast_model(\n            model_function, input_x=input_x, timestep=timestep_, **c\n        )\n\n    def to(self, device):\n        if type(device) == torch.device:\n            if self.config.enable_cuda_graph or self.config.enable_jit_freeze:\n                if device.type == \"cpu\":\n                    # comfyui tell we should move to cpu. but we cannt do it with cuda graph and freeze now.\n                    del self.stable_fast_model\n                    self.stable_fast_model = None\n                    print(\n                        \"\\33[93mWarning: Your graphics card doesn't have enough video memory to keep the model. 
If you experience a noticeable delay every time you start sampling, please consider disable enable_cuda_graph.\\33[0m\"\n                    )\n            else:\n                if self.stable_fast_model != None and device.type == \"cpu\":\n                    self.stable_fast_model.to_empty()\n        return self\n\n\nclass ApplyStableFastUnet:\n    @classmethod\n    def INPUT_TYPES(s):\n        return {\n            \"required\": {\n                \"model\": (\"MODEL\",),\n                \"enable_cuda_graph\": (\"BOOLEAN\", {\"default\": True}),\n            }\n        }\n\n    RETURN_TYPES = (\"MODEL\",)\n    FUNCTION = \"apply_stable_fast\"\n\n    CATEGORY = \"loaders\"\n\n    def apply_stable_fast(self, model, enable_cuda_graph):\n        config = gen_stable_fast_config()\n\n        if not enable_cuda_graph:\n            config.enable_cuda_graph = False\n            config.enable_jit_freeze = False\n\n        if config.memory_format is not None:\n            model.model.to(memory_format=config.memory_format)\n\n        patch = StableFastPatch(model, config)\n        model_stable_fast = model.clone()\n        model_stable_fast.set_model_unet_function_wrapper(patch)\n        return (model_stable_fast,)\n"
  },
  {
    "path": "requirements.txt",
    "content": "zstandard\nonnx\n"
  },
  {
    "path": "tensorrt_node.py",
    "content": "import copy\nimport enum\n\nimport comfy.model_management\nimport comfy.model_patcher\nimport nodes\nimport torch\n\nfrom .module.comfy_trace.model_base import (\n    UNetModelModuleFactory,\n)\nfrom .module.comfy_trace.sd import VAEDecodeModule\nfrom .module.controlnet_tensorrt import (\n    CallableTensorRTEngineWrapperDynamicShapeControlNet,\n)\nfrom .module.openaimodel_tensorrt import (\n    TENSORRT_CONTEXT_KEY,\n    CallableTensorRTEngineWrapperDynamicShapeUNetModelForward,\n    TensorRTEngineBlockContext,\n    do_hook_forward_timestep_embed,\n    undo_hook_forward_timestep_embed,\n)\nfrom .module.sd_tensorrt import CallableTensorRTEngineWrapperDynamicShapeVAEDecode\nfrom .module.tensorrt_wrapper import TensorRTEngineConfig, TensorRTEngineContext\n\n\nclass BlockTensorRTPatch(torch.nn.Module):\n    def __init__(self, config, model_config, model_sampling_type):\n        super().__init__()\n        self.model: torch.nn.Module = None\n        self.model_config = model_config\n        self.model_sampling_type = model_sampling_type\n        self.config = config\n        self.model_device = torch.device(\"cpu\")\n        self.tensorrt_module = None\n        self.lowvram_model_memory = 0\n\n    def __deepcopy__(self, memo=None):\n        return self\n\n    @property\n    def dtype(self):\n        return self.model.dtype\n\n    def warmup(\n        self,\n        x,\n        timesteps,\n        context,\n        y,\n        control,\n        transformer_options,\n        **kwargs,\n    ):\n        warmup_input_x = torch.zeros(\n            (\n                self.config.keep_batch_size * 2,\n                x.shape[1],\n                int(self.config.keep_height / 8),\n                int(self.config.keep_width / 8),\n            ),\n            device=x.device,\n            dtype=x.dtype,\n        )\n        warmup_x = warmup_input_x\n        warmup_timesteps = torch.ones(\n            (self.config.keep_batch_size * 2,),\n            device=timesteps.device,\n            dtype=timesteps.dtype,\n        )\n        warmup_context = None\n        if context is not None:\n            warmup_context = torch.zeros(\n                (\n                    self.config.keep_batch_size * 2,\n                    self.config.keep_embedding_block * 77,\n                    context.shape[2],\n                ),\n                device=context.device,\n                dtype=context.dtype,\n            )\n        warmup_y = None\n        if y is not None:\n            warmup_y = torch.zeros(\n                (\n                    self.config.keep_batch_size * 2,\n                    y.shape[1],\n                ),\n                device=y.device,\n                dtype=y.dtype,\n            )\n\n        self(\n            warmup_x,\n            warmup_timesteps,\n            warmup_context,\n            warmup_y,\n            None,\n            {},\n            **kwargs,\n        )\n\n    def __call__(\n        self,\n        x,\n        timesteps=None,\n        context=None,\n        y=None,\n        control=None,\n        transformer_options={},\n        **kwargs,\n    ):\n        if self.tensorrt_module is None:\n            self.tensorrt_module = TensorRTEngineBlockContext()\n            self.tensorrt_module.tensorrt_context.keep_models.append(self.model)\n\n            self.tensorrt_module.tensorrt_context.model_type = (\n                self.model_config.__class__.__name__\n            )\n            self.tensorrt_module.tensorrt_context.unet_config = (\n                
self.model_config.unet_config\n            )\n            self.tensorrt_module.tensorrt_context.model_sampling_type = (\n                self.model_sampling_type\n            )\n            self.tensorrt_module.tensorrt_context.cuda_stream = (\n                torch.cuda.current_stream()\n            )\n            self.tensorrt_module.tensorrt_context.cuda_device = x.device\n            \n            self.warmup(\n                x,\n                timesteps,\n                context,\n                y,\n                control,\n                transformer_options,\n                **kwargs,\n            )\n\n        transformer_options[TENSORRT_CONTEXT_KEY] = self.tensorrt_module\n\n        do_hook_forward_timestep_embed()\n        try:\n            out = self.model(\n                x,\n                timesteps,\n                context,\n                y,\n                control,\n                transformer_options,\n                **kwargs,\n            )\n        finally:\n            undo_hook_forward_timestep_embed()\n            transformer_options.pop(TENSORRT_CONTEXT_KEY)\n\n        return out\n\n    def to(self, device):\n        if type(device) is torch.device:\n            self.model_device = device\n        return self\n\n\nclass UnetTensorRTPatch(BlockTensorRTPatch):\n    def __init__(self, *args):\n        super().__init__(*args)\n        self.tensorrt_context = TensorRTEngineContext()\n\n    def __call__(\n        self,\n        x,\n        timesteps=None,\n        context=None,\n        y=None,\n        control=None,\n        transformer_options={},\n        **kwargs,\n    ):\n        if self.tensorrt_module is None:\n            devices = set((v.device for v in self.model.state_dict().values()))\n            if torch.device(\"cpu\") in devices and self.lowvram_model_memory > 0:\n                self.tensorrt_context.enable_weight_streaming = True\n                self.tensorrt_context.lowvram_model_memory = self.lowvram_model_memory\n\n            self.tensorrt_context.model_type = self.model_config.__class__.__name__\n            self.tensorrt_context.unet_config = self.model_config.unet_config\n            self.tensorrt_context.model_sampling_type = self.model_sampling_type\n\n            if self.tensorrt_context.cuda_stream is None:\n                # self.tensorrt_context.cuda_stream = torch.cuda.current_stream()\n                self.tensorrt_context.cuda_stream = torch.cuda.Stream(x.device)\n                self.tensorrt_context.infer_cuda_stream_sync = True\n\n            self.tensorrt_context.identify_weight_hash = (\n                self.config.use_dedicated_engine\n            )\n\n            self.tensorrt_context.cuda_device = x.device\n            # self.tensorrt_context.dtype = input_x.dtype\n\n            self.tensorrt_module = (\n                CallableTensorRTEngineWrapperDynamicShapeUNetModelForward(\n                    self.tensorrt_context, \"\"\n                )\n            )\n            if control is None:\n                self.warmup(\n                    x,\n                    timesteps,\n                    context,\n                    y,\n                    control,\n                    transformer_options,\n                    **kwargs,\n                )\n\n        module_factory = UNetModelModuleFactory(\n            self.model,\n            self.model_config,\n            x=x,\n            timesteps=timesteps,\n            context=context,\n            y=y,\n            control=control,\n            
transformer_options=transformer_options,\n            **kwargs,\n        )\n\n        with module_factory.converted_module_context() as (m_model, m_kwargs):\n            out = self.tensorrt_module(m_model, **m_kwargs)\n\n        return out\n\n\nclass ModelUnetFunctionWrapper:\n    def __init__(self, patch):\n        self.patch = patch\n\n    def __deepcopy__(self, memo=None):\n        return self\n\n    def __call__(self, model_function, params):\n        input_x = params.get(\"input\")\n        timestep_ = params.get(\"timestep\")\n        c = params.get(\"c\")\n\n        origin_diffusion_model = model_function.__self__.diffusion_model\n        self.patch.model = origin_diffusion_model\n        model_function.__self__.diffusion_model = self.patch\n        try:\n            out = model_function(input_x, timestep_, **c)\n        finally:\n            model_function.__self__.diffusion_model = origin_diffusion_model\n\n        return out\n\n\ndef hook_memory_required(input_shape):\n    return 0\n\n\nclass TensorRTEngineOriginModelPatcherWrapper_BlockPatch(\n    comfy.model_patcher.ModelPatcher\n):\n    @staticmethod\n    def cast_from(other):\n        tcls = comfy.model_patcher.ModelPatcher\n        if isinstance(other, tcls):\n            other.__class__ = TensorRTEngineOriginModelPatcherWrapper_BlockPatch\n            return other\n        raise ValueError(f\"instance must be {tcls.__qualname__}\")\n\n    def patch_init(self, tensorrt_module_patch):\n        self.tensorrt_module_patch = tensorrt_module_patch\n\n    def patch_deinit(self):\n        self.tensorrt_module_patch = None\n        del self.tensorrt_module_patch\n\n    def cast_to_base_model(self):\n        self.patch_deinit()\n        self.__class__ = comfy.model_patcher.ModelPatcher\n        return self\n\n    def patch_model(self, device_to=None, *arg, **kwargs):\n        model = super().patch_model()\n\n        if device_to is not None:\n            for name, module in model.named_children():\n                if name in (\"diffusion_model\",):\n                    # skip the UNet's input/middle/output blocks; those run\n                    # through TensorRT and stay off the target device\n                    for sub_name, sub_module in module.named_children():\n                        if sub_name not in (\n                            \"input_blocks\",\n                            \"middle_block\",\n                            \"output_blocks\",\n                        ):\n                            sub_module.to(device_to)\n                else:\n                    module.to(device_to)\n            self.current_device = device_to\n\n        return model\n\n    def __del__(self):\n        self.model.to(self.current_device)\n\n\nclass TensorRTEngineOriginModelPatcherWrapper_UnetPatch(\n    comfy.model_patcher.ModelPatcher\n):\n    @staticmethod\n    def cast_from(other):\n        tcls = comfy.model_patcher.ModelPatcher\n        if isinstance(other, tcls):\n            other.__class__ = TensorRTEngineOriginModelPatcherWrapper_UnetPatch\n            return other\n        raise ValueError(f\"instance must be {tcls.__qualname__}\")\n\n    def patch_init(self, tensorrt_module_patch):\n        self.tensorrt_module_patch = tensorrt_module_patch\n\n    def patch_deinit(self):\n        self.tensorrt_module_patch = None\n        del self.tensorrt_module_patch\n\n    def cast_to_base_model(self):\n        self.patch_deinit()\n        self.__class__ = comfy.model_patcher.ModelPatcher\n        return self\n\n    def model_size(self):\n        if (\n            self.tensorrt_module_patch is None\n            or self.tensorrt_module_patch.tensorrt_module is None\n        ):\n            return super().model_size()\n        return 0\n\n    def patch_model_lowvram(\n        self,\n        device_to=None,\n        lowvram_model_memory=0,\n        force_patch_weights=False,\n        *arg,\n        **kwargs,\n    ):\n        if lowvram_model_memory > 0 and self.tensorrt_module_patch is not None:\n            self.tensorrt_module_patch.lowvram_model_memory = lowvram_model_memory\n        if (\n            self.tensorrt_module_patch is None\n            or self.tensorrt_module_patch.tensorrt_module is None\n        ):\n            return super().patch_model_lowvram(\n                device_to=device_to,\n                lowvram_model_memory=lowvram_model_memory,\n                force_patch_weights=force_patch_weights,\n                *arg,\n                **kwargs,\n            )\n        return self.patch_model(\n            device_to=device_to,\n        )\n\n    def patch_model(self, device_to=None, *arg, **kwargs):\n        model = super().patch_model()\n\n        if device_to is not None:\n            self.current_device = device_to\n\n        return model\n\n    def __del__(self):\n        self.model.to(self.current_device)\n\n\nclass PatchType(enum.Enum):\n    UNET = UnetTensorRTPatch, TensorRTEngineOriginModelPatcherWrapper_UnetPatch\n    UNET_BLOCK = BlockTensorRTPatch, TensorRTEngineOriginModelPatcherWrapper_BlockPatch\n\n\nclass ApplyTensorRTUnet:\n    @classmethod\n    def INPUT_TYPES(s):\n        return {\n            \"required\": {\n                \"model\": (\"MODEL\",),\n                \"enable_cuda_graph\": (\"BOOLEAN\", {\"default\": True}),\n                \"patch_type\": ([e.name for e in PatchType], {\"default\": \"UNET\"}),\n                \"keep_width\": (\n                    \"INT\",\n                    {\"default\": 768, \"min\": 16, \"max\": nodes.MAX_RESOLUTION, \"step\": 8},\n                ),\n                \"keep_height\": (\n                    \"INT\",\n                    {\"default\": 768, \"min\": 16, \"max\": nodes.MAX_RESOLUTION, \"step\": 8},\n                ),\n                \"keep_batch_size\": (\"INT\", {\"default\": 1, \"min\": 1, \"max\": 4096}),\n                \"keep_embedding_block\": (\"INT\", {\"default\": 2, \"min\": 1, \"max\": 4096}),\n                \"use_dedicated_engine\": (\"BOOLEAN\", {\"default\": False}),\n            }\n        }\n\n    RETURN_TYPES = (\"MODEL\",)\n    FUNCTION = \"apply_tensorrt\"\n\n    CATEGORY = \"loaders\"\n\n    def apply_tensorrt(\n        self,\n        model,\n        enable_cuda_graph,\n        patch_type,\n        keep_width,\n        keep_height,\n        keep_batch_size,\n        keep_embedding_block,\n        use_dedicated_engine,\n    ):\n        config = TensorRTEngineConfig(\n            enable_cuda_graph=enable_cuda_graph,\n            keep_width=keep_width,\n            keep_height=keep_height,\n            keep_batch_size=keep_batch_size,\n            keep_embedding_block=keep_embedding_block,\n            use_dedicated_engine=use_dedicated_engine,\n        )\n        patch_type_classes = PatchType[patch_type].value\n        model_tensor_rt = model.clone()\n        patch = patch_type_classes[0](\n            config, model.model.model_config, model.model.model_type\n        )\n        model_tensor_rt = patch_type_classes[1].cast_from(model_tensor_rt)\n        patch.model = model_tensor_rt\n        model_tensor_rt.set_model_unet_function_wrapper(ModelUnetFunctionWrapper(patch))\n        model_tensor_rt.patch_init(patch)\n        model_tensor_rt.add_object_patch(\"memory_required\", 
hook_memory_required)\n        return (model_tensor_rt,)\n\n\nclass VAEDecodeTensorRTPatch:\n    def __init__(self, model, config):\n        self.model = model\n        self.org_decode = model.first_stage_model.decode\n        self.config = config\n        self.tensorrt_context = TensorRTEngineContext()\n        self.tensorrt_module = None\n\n    def warmup(self, samples_in):\n        warmup_samples = torch.zeros(\n            (\n                1,\n                samples_in.shape[1],\n                int(self.config.keep_height / 8),\n                int(self.config.keep_width / 8),\n            ),\n            device=samples_in.device,\n            dtype=samples_in.dtype,\n        )\n\n        self(warmup_samples)\n\n    def __call__(self, samples_in):\n        if self.tensorrt_module is None:\n            self.tensorrt_module = CallableTensorRTEngineWrapperDynamicShapeVAEDecode(\n                self.tensorrt_context, \"\"\n            )\n            self.warmup(samples_in)\n\n        self.tensorrt_context.cuda_stream = torch.cuda.current_stream()\n        self.tensorrt_context.cuda_device = samples_in.device\n        self.tensorrt_context.dtype = samples_in.dtype\n\n        batch_number = 1\n        pixel_samples = torch.empty(\n            (\n                samples_in.shape[0],\n                3,\n                round(samples_in.shape[2] * 8),\n                round(samples_in.shape[3] * 8),\n            ),\n            device=samples_in.device,\n        )\n        for x in range(0, samples_in.shape[0], batch_number):\n            samples = samples_in[x : x + batch_number]\n            pixel_samples[x : x + batch_number] = self.tensorrt_module(\n                VAEDecodeModule(self.model.first_stage_model, self.org_decode),\n                samples=samples,\n            )\n        return pixel_samples\n\n\nclass ApplyTensorRTVaeDecoder:\n    @classmethod\n    def INPUT_TYPES(s):\n        return {\n            \"required\": {\n                \"vae\": (\"VAE\",),\n                \"enable_cuda_graph\": (\"BOOLEAN\", {\"default\": False}),\n                \"keep_width\": (\n                    \"INT\",\n                    {\"default\": 768, \"min\": 16, \"max\": nodes.MAX_RESOLUTION, \"step\": 8},\n                ),\n                \"keep_height\": (\n                    \"INT\",\n                    {\"default\": 768, \"min\": 16, \"max\": nodes.MAX_RESOLUTION, \"step\": 8},\n                ),\n            }\n        }\n\n    RETURN_TYPES = (\"VAE\",)\n    FUNCTION = \"apply_tensorrt\"\n\n    CATEGORY = \"loaders\"\n\n    def apply_tensorrt(\n        self,\n        vae,\n        enable_cuda_graph,\n        keep_width,\n        keep_height,\n    ):\n        # hook comfy/sd.py#VAE.patcher\n        config = TensorRTEngineConfig(\n            enable_cuda_graph=enable_cuda_graph,\n            keep_width=keep_width,\n            keep_height=keep_height,\n        )\n        patch = VAEDecodeTensorRTPatch(vae, config)\n        vae_tensor_rt = copy.copy(vae)\n        vae_tensor_rt.patcher = vae_tensor_rt.patcher.clone()\n        vae_tensor_rt.patcher.add_object_patch(\"decode\", patch)\n        return (vae_tensor_rt,)\n\n\nclass ControlNetTensorRTPatch:\n    def __init__(self, control_model, config):\n        self.control_model = control_model\n        self.config = config\n        self.tensorrt_context = TensorRTEngineContext()\n        self.tensorrt_module = None\n        self.dtype = torch.float16\n\n    def state_dict(self):\n        return self.control_model.state_dict()\n\n    
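# Device moves are forwarded to the wrapped ControlNet module.\n    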
def to(self, device):\n        return self.control_model.to(device)\n\n    def warmup(self, x, hint, timesteps, context, y=None):\n        warmup_x = torch.zeros(\n            (\n                self.config.keep_batch_size * 2,\n                x.shape[1],\n                int(self.config.keep_height / 8),\n                int(self.config.keep_width / 8),\n            ),\n            device=x.device,\n            dtype=x.dtype,\n        )\n        warmup_hint = torch.zeros(\n            (\n                self.config.keep_batch_size,\n                hint.shape[1],\n                self.config.keep_height,\n                self.config.keep_width,\n            ),\n            device=hint.device,\n            dtype=hint.dtype,\n        )\n        warmup_timesteps = torch.ones(\n            (self.config.keep_batch_size * 2,),\n            device=timesteps.device,\n            dtype=timesteps.dtype,\n        )\n        warmup_context = torch.zeros(\n            (\n                self.config.keep_batch_size * 2,\n                self.config.keep_embedding_block * 77,\n                context.shape[2],\n            ),\n            device=context.device,\n            dtype=context.dtype,\n        )\n\n        self(warmup_x, warmup_hint, warmup_timesteps, warmup_context, y)\n\n    def __call__(self, x, hint, timesteps, context, y=None):\n        if self.tensorrt_module is None:\n            self.tensorrt_module = CallableTensorRTEngineWrapperDynamicShapeControlNet(\n                self.tensorrt_context, \"\"\n            )\n            self.warmup(x, hint, timesteps, context, y)\n\n        self.tensorrt_context.cuda_stream = torch.cuda.current_stream()\n        self.tensorrt_context.cuda_device = x.device\n        self.tensorrt_context.dtype = x.dtype\n\n        return self.tensorrt_module(\n            self.control_model,\n            x=x,\n            hint=hint,\n            timesteps=timesteps,\n            context=context,\n            y=y,\n        )\n\n\nclass ApplyTensorRTControlNet:\n    @classmethod\n    def INPUT_TYPES(s):\n        return {\n            \"required\": {\n                \"control_net\": (\"CONTROL_NET\",),\n                \"enable_cuda_graph\": (\"BOOLEAN\", {\"default\": True}),\n                \"keep_width\": (\n                    \"INT\",\n                    {\"default\": 768, \"min\": 16, \"max\": nodes.MAX_RESOLUTION, \"step\": 8},\n                ),\n                \"keep_height\": (\n                    \"INT\",\n                    {\"default\": 768, \"min\": 16, \"max\": nodes.MAX_RESOLUTION, \"step\": 8},\n                ),\n                \"keep_batch_size\": (\"INT\", {\"default\": 1, \"min\": 1, \"max\": 4096}),\n            }\n        }\n\n    RETURN_TYPES = (\"CONTROL_NET\",)\n    FUNCTION = \"apply_tensorrt\"\n\n    CATEGORY = \"loaders\"\n\n    def apply_tensorrt(\n        self,\n        control_net,\n        enable_cuda_graph,\n        keep_width,\n        keep_height,\n        keep_batch_size,\n    ):\n        # hook comfy/controlnet.py#ControlNet.control_model_wrapped\n        config = TensorRTEngineConfig(\n            enable_cuda_graph=enable_cuda_graph,\n            keep_width=keep_width,\n            keep_height=keep_height,\n            keep_batch_size=keep_batch_size,\n        )\n        patch = ControlNetTensorRTPatch(control_net.control_model, config)\n        control_net_tensor_rt = copy.copy(control_net)\n        control_net_tensor_rt.control_model = patch\n        control_net_tensor_rt = control_net_tensor_rt.copy()\n        return 
(control_net_tensor_rt,)\n"
  },
  {
    "path": "tests/workflow.json",
    "content": "{\n  \"last_node_id\": 16,\n  \"last_link_id\": 31,\n  \"nodes\": [\n    {\n      \"id\": 7,\n      \"type\": \"CLIPTextEncode\",\n      \"pos\": [\n        370,\n        460\n      ],\n      \"size\": {\n        \"0\": 214.6455841064453,\n        \"1\": 108.3536148071289\n      },\n      \"flags\": {},\n      \"order\": 4,\n      \"mode\": 0,\n      \"inputs\": [\n        {\n          \"name\": \"clip\",\n          \"type\": \"CLIP\",\n          \"link\": 26\n        }\n      ],\n      \"outputs\": [\n        {\n          \"name\": \"CONDITIONING\",\n          \"type\": \"CONDITIONING\",\n          \"links\": [\n            6\n          ],\n          \"slot_index\": 0\n        }\n      ],\n      \"properties\": {\n        \"Node name for S&R\": \"CLIPTextEncode\"\n      },\n      \"widgets_values\": [\n        \"text, watermark\"\n      ]\n    },\n    {\n      \"id\": 6,\n      \"type\": \"CLIPTextEncode\",\n      \"pos\": [\n        370,\n        290\n      ],\n      \"size\": {\n        \"0\": 210,\n        \"1\": 126.86872863769531\n      },\n      \"flags\": {},\n      \"order\": 3,\n      \"mode\": 0,\n      \"inputs\": [\n        {\n          \"name\": \"clip\",\n          \"type\": \"CLIP\",\n          \"link\": 25\n        }\n      ],\n      \"outputs\": [\n        {\n          \"name\": \"CONDITIONING\",\n          \"type\": \"CONDITIONING\",\n          \"links\": [\n            4\n          ],\n          \"slot_index\": 0\n        }\n      ],\n      \"properties\": {\n        \"Node name for S&R\": \"CLIPTextEncode\"\n      },\n      \"widgets_values\": [\n        \"1girl\"\n      ]\n    },\n    {\n      \"id\": 8,\n      \"type\": \"VAEDecode\",\n      \"pos\": [\n        970,\n        270\n      ],\n      \"size\": {\n        \"0\": 210,\n        \"1\": 46\n      },\n      \"flags\": {},\n      \"order\": 9,\n      \"mode\": 0,\n      \"inputs\": [\n        {\n          \"name\": \"samples\",\n          \"type\": \"LATENT\",\n          \"link\": 7\n        },\n        {\n          \"name\": \"vae\",\n          \"type\": \"VAE\",\n          \"link\": 8\n        }\n      ],\n      \"outputs\": [\n        {\n          \"name\": \"IMAGE\",\n          \"type\": \"IMAGE\",\n          \"links\": [\n            10\n          ],\n          \"slot_index\": 0\n        }\n      ],\n      \"properties\": {\n        \"Node name for S&R\": \"VAEDecode\"\n      }\n    },\n    {\n      \"id\": 5,\n      \"type\": \"EmptyLatentImage\",\n      \"pos\": [\n        370,\n        610\n      ],\n      \"size\": {\n        \"0\": 210,\n        \"1\": 106\n      },\n      \"flags\": {},\n      \"order\": 0,\n      \"mode\": 0,\n      \"outputs\": [\n        {\n          \"name\": \"LATENT\",\n          \"type\": \"LATENT\",\n          \"links\": [\n            2\n          ],\n          \"slot_index\": 0\n        }\n      ],\n      \"properties\": {\n        \"Node name for S&R\": \"EmptyLatentImage\"\n      },\n      \"widgets_values\": [\n        512,\n        512,\n        1\n      ]\n    },\n    {\n      \"id\": 11,\n      \"type\": \"ApplyStableFastUnet\",\n      \"pos\": [\n        369,\n        180\n      ],\n      \"size\": {\n        \"0\": 210,\n        \"1\": 58\n      },\n      \"flags\": {},\n      \"order\": 6,\n      \"mode\": 0,\n      \"inputs\": [\n        {\n          \"name\": \"model\",\n          \"type\": \"MODEL\",\n          \"link\": 24\n        }\n      ],\n      \"outputs\": [\n        {\n          \"name\": \"MODEL\",\n          \"type\": \"MODEL\",\n       
   \"links\": [\n            30\n          ],\n          \"shape\": 3,\n          \"slot_index\": 0\n        }\n      ],\n      \"properties\": {\n        \"Node name for S&R\": \"ApplyStableFastUnet\"\n      },\n      \"widgets_values\": [\n        true\n      ]\n    },\n    {\n      \"id\": 16,\n      \"type\": \"ApplyTensorRTUnet\",\n      \"pos\": [\n        614,\n        31\n      ],\n      \"size\": {\n        \"0\": 315,\n        \"1\": 202\n      },\n      \"flags\": {},\n      \"order\": 7,\n      \"mode\": 4,\n      \"inputs\": [\n        {\n          \"name\": \"model\",\n          \"type\": \"MODEL\",\n          \"link\": 30\n        }\n      ],\n      \"outputs\": [\n        {\n          \"name\": \"MODEL\",\n          \"type\": \"MODEL\",\n          \"links\": [\n            31\n          ],\n          \"shape\": 3,\n          \"slot_index\": 0\n        }\n      ],\n      \"properties\": {\n        \"Node name for S&R\": \"ApplyTensorRTUnet\"\n      },\n      \"widgets_values\": [\n        true,\n        \"UNET_BLOCK\",\n        true,\n        768,\n        768,\n        1,\n        2\n      ]\n    },\n    {\n      \"id\": 15,\n      \"type\": \"PatchModelAddDownscale\",\n      \"pos\": [\n        13,\n        336\n      ],\n      \"size\": {\n        \"0\": 315,\n        \"1\": 202\n      },\n      \"flags\": {},\n      \"order\": 2,\n      \"mode\": 4,\n      \"inputs\": [\n        {\n          \"name\": \"model\",\n          \"type\": \"MODEL\",\n          \"link\": 28\n        }\n      ],\n      \"outputs\": [\n        {\n          \"name\": \"MODEL\",\n          \"type\": \"MODEL\",\n          \"links\": [\n            29\n          ],\n          \"shape\": 3,\n          \"slot_index\": 0\n        }\n      ],\n      \"properties\": {\n        \"Node name for S&R\": \"PatchModelAddDownscale\"\n      },\n      \"widgets_values\": [\n        4,\n        2,\n        0,\n        0.35,\n        true,\n        \"bicubic\",\n        \"bicubic\"\n      ]\n    },\n    {\n      \"id\": 4,\n      \"type\": \"CheckpointLoaderSimple\",\n      \"pos\": [\n        12,\n        599\n      ],\n      \"size\": {\n        \"0\": 315,\n        \"1\": 98\n      },\n      \"flags\": {},\n      \"order\": 1,\n      \"mode\": 0,\n      \"outputs\": [\n        {\n          \"name\": \"MODEL\",\n          \"type\": \"MODEL\",\n          \"links\": [\n            28\n          ],\n          \"slot_index\": 0\n        },\n        {\n          \"name\": \"CLIP\",\n          \"type\": \"CLIP\",\n          \"links\": [\n            25,\n            26\n          ],\n          \"slot_index\": 1\n        },\n        {\n          \"name\": \"VAE\",\n          \"type\": \"VAE\",\n          \"links\": [\n            8\n          ],\n          \"slot_index\": 2\n        }\n      ],\n      \"properties\": {\n        \"Node name for S&R\": \"CheckpointLoaderSimple\"\n      },\n      \"widgets_values\": [\n        \"majicmixRealistic_v6.safetensors\"\n      ]\n    },\n    {\n      \"id\": 3,\n      \"type\": \"KSampler\",\n      \"pos\": [\n        620,\n        280\n      ],\n      \"size\": {\n        \"0\": 315,\n        \"1\": 262\n      },\n      \"flags\": {},\n      \"order\": 8,\n      \"mode\": 0,\n      \"inputs\": [\n        {\n          \"name\": \"model\",\n          \"type\": \"MODEL\",\n          \"link\": 31\n        },\n        {\n          \"name\": \"positive\",\n          \"type\": \"CONDITIONING\",\n          \"link\": 4\n        },\n        {\n          \"name\": \"negative\",\n          
\"type\": \"CONDITIONING\",\n          \"link\": 6\n        },\n        {\n          \"name\": \"latent_image\",\n          \"type\": \"LATENT\",\n          \"link\": 2\n        }\n      ],\n      \"outputs\": [\n        {\n          \"name\": \"LATENT\",\n          \"type\": \"LATENT\",\n          \"links\": [\n            7\n          ],\n          \"slot_index\": 0\n        }\n      ],\n      \"properties\": {\n        \"Node name for S&R\": \"KSampler\"\n      },\n      \"widgets_values\": [\n        614147636169226,\n        \"randomize\",\n        20,\n        6,\n        \"euler_ancestral\",\n        \"karras\",\n        1\n      ]\n    },\n    {\n      \"id\": 10,\n      \"type\": \"PreviewImage\",\n      \"pos\": [\n        956,\n        367\n      ],\n      \"size\": {\n        \"0\": 271.382568359375,\n        \"1\": 412.4344177246094\n      },\n      \"flags\": {},\n      \"order\": 10,\n      \"mode\": 0,\n      \"inputs\": [\n        {\n          \"name\": \"images\",\n          \"type\": \"IMAGE\",\n          \"link\": 10\n        }\n      ],\n      \"properties\": {\n        \"Node name for S&R\": \"PreviewImage\"\n      }\n    },\n    {\n      \"id\": 14,\n      \"type\": \"FreeU\",\n      \"pos\": [\n        17,\n        147\n      ],\n      \"size\": {\n        \"0\": 315,\n        \"1\": 130\n      },\n      \"flags\": {},\n      \"order\": 5,\n      \"mode\": 4,\n      \"inputs\": [\n        {\n          \"name\": \"model\",\n          \"type\": \"MODEL\",\n          \"link\": 29\n        }\n      ],\n      \"outputs\": [\n        {\n          \"name\": \"MODEL\",\n          \"type\": \"MODEL\",\n          \"links\": [\n            24\n          ],\n          \"shape\": 3,\n          \"slot_index\": 0\n        }\n      ],\n      \"properties\": {\n        \"Node name for S&R\": \"FreeU\"\n      },\n      \"widgets_values\": [\n        1.2466668701171875,\n        1.2,\n        0.9,\n        0.2\n      ]\n    }\n  ],\n  \"links\": [\n    [\n      2,\n      5,\n      0,\n      3,\n      3,\n      \"LATENT\"\n    ],\n    [\n      4,\n      6,\n      0,\n      3,\n      1,\n      \"CONDITIONING\"\n    ],\n    [\n      6,\n      7,\n      0,\n      3,\n      2,\n      \"CONDITIONING\"\n    ],\n    [\n      7,\n      3,\n      0,\n      8,\n      0,\n      \"LATENT\"\n    ],\n    [\n      8,\n      4,\n      2,\n      8,\n      1,\n      \"VAE\"\n    ],\n    [\n      10,\n      8,\n      0,\n      10,\n      0,\n      \"IMAGE\"\n    ],\n    [\n      24,\n      14,\n      0,\n      11,\n      0,\n      \"MODEL\"\n    ],\n    [\n      25,\n      4,\n      1,\n      6,\n      0,\n      \"CLIP\"\n    ],\n    [\n      26,\n      4,\n      1,\n      7,\n      0,\n      \"CLIP\"\n    ],\n    [\n      28,\n      4,\n      0,\n      15,\n      0,\n      \"MODEL\"\n    ],\n    [\n      29,\n      15,\n      0,\n      14,\n      0,\n      \"MODEL\"\n    ],\n    [\n      30,\n      11,\n      0,\n      16,\n      0,\n      \"MODEL\"\n    ],\n    [\n      31,\n      16,\n      0,\n      3,\n      0,\n      \"MODEL\"\n    ]\n  ],\n  \"groups\": [],\n  \"config\": {},\n  \"extra\": {},\n  \"version\": 0.4\n}"
  }
]