[
  {
    "path": ".github/workflows/publish.yml",
    "content": "name: Publish to Comfy registry\non:\n  workflow_dispatch:\n  push:\n    branches:\n      - main\n      - master\n    paths:\n      - \"pyproject.toml\"\n\njobs:\n  publish-node:\n    name: Publish Custom Node to registry\n    runs-on: ubuntu-latest\n    steps:\n      - name: Check out code\n        uses: actions/checkout@v4\n      - name: Publish Custom Node\n        uses: Comfy-Org/publish-node-action@main\n        with:\n          ## Add your own personal access token to your Github Repository secrets and reference it here.\n          personal_access_token: ${{ secrets.REGISTRY_ACCESS_TOKEN }}\n"
  },
  {
    "path": ".gitignore",
    "content": ".DS_Store\n*pyc\n.vscode\n__pycache__\n*.egg-info\n\ncheckpoints\nToonCrafter/checkpoints\nresults\nbackup\nLOG\n/models\nToonCrafter/tmp\nThumbs.db"
  },
  {
    "path": "LICENSE",
    "content": "                                 Apache License\n                           Version 2.0, January 2004\n                        http://www.apache.org/licenses/\n\n   TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION\n\n   1. Definitions.\n\n      \"License\" shall mean the terms and conditions for use, reproduction,\n      and distribution as defined by Sections 1 through 9 of this document.\n\n      \"Licensor\" shall mean the copyright owner or entity authorized by\n      the copyright owner that is granting the License.\n\n      \"Legal Entity\" shall mean the union of the acting entity and all\n      other entities that control, are controlled by, or are under common\n      control with that entity. For the purposes of this definition,\n      \"control\" means (i) the power, direct or indirect, to cause the\n      direction or management of such entity, whether by contract or\n      otherwise, or (ii) ownership of fifty percent (50%) or more of the\n      outstanding shares, or (iii) beneficial ownership of such entity.\n\n      \"You\" (or \"Your\") shall mean an individual or Legal Entity\n      exercising permissions granted by this License.\n\n      \"Source\" form shall mean the preferred form for making modifications,\n      including but not limited to software source code, documentation\n      source, and configuration files.\n\n      \"Object\" form shall mean any form resulting from mechanical\n      transformation or translation of a Source form, including but\n      not limited to compiled object code, generated documentation,\n      and conversions to other media types.\n\n      \"Work\" shall mean the work of authorship, whether in Source or\n      Object form, made available under the License, as indicated by a\n      copyright notice that is included in or attached to the work\n      (an example is provided in the Appendix below).\n\n      \"Derivative Works\" shall mean any work, whether in Source or Object\n      form, that is based on (or derived from) the Work and for which the\n      editorial revisions, annotations, elaborations, or other modifications\n      represent, as a whole, an original work of authorship. For the purposes\n      of this License, Derivative Works shall not include works that remain\n      separable from, or merely link (or bind by name) to the interfaces of,\n      the Work and Derivative Works thereof.\n\n      \"Contribution\" shall mean any work of authorship, including\n      the original version of the Work and any modifications or additions\n      to that Work or Derivative Works thereof, that is intentionally\n      submitted to Licensor for inclusion in the Work by the copyright owner\n      or by an individual or Legal Entity authorized to submit on behalf of\n      the copyright owner. 
For the purposes of this definition, \"submitted\"\n      means any form of electronic, verbal, or written communication sent\n      to the Licensor or its representatives, including but not limited to\n      communication on electronic mailing lists, source code control systems,\n      and issue tracking systems that are managed by, or on behalf of, the\n      Licensor for the purpose of discussing and improving the Work, but\n      excluding communication that is conspicuously marked or otherwise\n      designated in writing by the copyright owner as \"Not a Contribution.\"\n\n      \"Contributor\" shall mean Licensor and any individual or Legal Entity\n      on behalf of whom a Contribution has been received by Licensor and\n      subsequently incorporated within the Work.\n\n   2. Grant of Copyright License. Subject to the terms and conditions of\n      this License, each Contributor hereby grants to You a perpetual,\n      worldwide, non-exclusive, no-charge, royalty-free, irrevocable\n      copyright license to reproduce, prepare Derivative Works of,\n      publicly display, publicly perform, sublicense, and distribute the\n      Work and such Derivative Works in Source or Object form.\n\n   3. Grant of Patent License. Subject to the terms and conditions of\n      this License, each Contributor hereby grants to You a perpetual,\n      worldwide, non-exclusive, no-charge, royalty-free, irrevocable\n      (except as stated in this section) patent license to make, have made,\n      use, offer to sell, sell, import, and otherwise transfer the Work,\n      where such license applies only to those patent claims licensable\n      by such Contributor that are necessarily infringed by their\n      Contribution(s) alone or by combination of their Contribution(s)\n      with the Work to which such Contribution(s) was submitted. If You\n      institute patent litigation against any entity (including a\n      cross-claim or counterclaim in a lawsuit) alleging that the Work\n      or a Contribution incorporated within the Work constitutes direct\n      or contributory patent infringement, then any patent licenses\n      granted to You under this License for that Work shall terminate\n      as of the date such litigation is filed.\n\n   4. Redistribution. 
You may reproduce and distribute copies of the\n      Work or Derivative Works thereof in any medium, with or without\n      modifications, and in Source or Object form, provided that You\n      meet the following conditions:\n\n      (a) You must give any other recipients of the Work or\n          Derivative Works a copy of this License; and\n\n      (b) You must cause any modified files to carry prominent notices\n          stating that You changed the files; and\n\n      (c) You must retain, in the Source form of any Derivative Works\n          that You distribute, all copyright, patent, trademark, and\n          attribution notices from the Source form of the Work,\n          excluding those notices that do not pertain to any part of\n          the Derivative Works; and\n\n      (d) If the Work includes a \"NOTICE\" text file as part of its\n          distribution, then any Derivative Works that You distribute must\n          include a readable copy of the attribution notices contained\n          within such NOTICE file, excluding those notices that do not\n          pertain to any part of the Derivative Works, in at least one\n          of the following places: within a NOTICE text file distributed\n          as part of the Derivative Works; within the Source form or\n          documentation, if provided along with the Derivative Works; or,\n          within a display generated by the Derivative Works, if and\n          wherever such third-party notices normally appear. The contents\n          of the NOTICE file are for informational purposes only and\n          do not modify the License. You may add Your own attribution\n          notices within Derivative Works that You distribute, alongside\n          or as an addendum to the NOTICE text from the Work, provided\n          that such additional attribution notices cannot be construed\n          as modifying the License.\n\n      You may add Your own copyright statement to Your modifications and\n      may provide additional or different license terms and conditions\n      for use, reproduction, or distribution of Your modifications, or\n      for any such Derivative Works as a whole, provided Your use,\n      reproduction, and distribution of the Work otherwise complies with\n      the conditions stated in this License.\n\n   5. Submission of Contributions. Unless You explicitly state otherwise,\n      any Contribution intentionally submitted for inclusion in the Work\n      by You to the Licensor shall be under the terms and conditions of\n      this License, without any additional terms or conditions.\n      Notwithstanding the above, nothing herein shall supersede or modify\n      the terms of any separate license agreement you may have executed\n      with Licensor regarding such Contributions.\n\n   6. Trademarks. This License does not grant permission to use the trade\n      names, trademarks, service marks, or product names of the Licensor,\n      except as required for reasonable and customary use in describing the\n      origin of the Work and reproducing the content of the NOTICE file.\n\n   7. Disclaimer of Warranty. 
Unless required by applicable law or\n      agreed to in writing, Licensor provides the Work (and each\n      Contributor provides its Contributions) on an \"AS IS\" BASIS,\n      WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or\n      implied, including, without limitation, any warranties or conditions\n      of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A\n      PARTICULAR PURPOSE. You are solely responsible for determining the\n      appropriateness of using or redistributing the Work and assume any\n      risks associated with Your exercise of permissions under this License.\n\n   8. Limitation of Liability. In no event and under no legal theory,\n      whether in tort (including negligence), contract, or otherwise,\n      unless required by applicable law (such as deliberate and grossly\n      negligent acts) or agreed to in writing, shall any Contributor be\n      liable to You for damages, including any direct, indirect, special,\n      incidental, or consequential damages of any character arising as a\n      result of this License or out of the use or inability to use the\n      Work (including but not limited to damages for loss of goodwill,\n      work stoppage, computer failure or malfunction, or any and all\n      other commercial damages or losses), even if such Contributor\n      has been advised of the possibility of such damages.\n\n   9. Accepting Warranty or Additional Liability. While redistributing\n      the Work or Derivative Works thereof, You may choose to offer,\n      and charge a fee for, acceptance of support, warranty, indemnity,\n      or other liability obligations and/or rights consistent with this\n      License. However, in accepting such obligations, You may act only\n      on Your own behalf and on Your sole responsibility, not on behalf\n      of any other Contributor, and only if You agree to indemnify,\n      defend, and hold each Contributor harmless for any liability\n      incurred by, or claims asserted against, such Contributor by reason\n      of your accepting any such warranty or additional liability.\n\n   END OF TERMS AND CONDITIONS\n\n   APPENDIX: How to apply the Apache License to your work.\n\n      To apply the Apache License to your work, attach the following\n      boilerplate notice, with the fields enclosed by brackets \"[]\"\n      replaced with your own identifying information. (Don't include\n      the brackets!)  The text should be enclosed in the appropriate\n      comment syntax for the file format. We also recommend that a\n      file or class name and description of purpose be included on the\n      same \"printed page\" as the copyright notice for easier\n      identification within third-party archives.\n\n   Copyright Tencent\n\n   Licensed under the Apache License, Version 2.0 (the \"License\");\n   you may not use this file except in compliance with the License.\n   You may obtain a copy of the License at\n\n       http://www.apache.org/licenses/LICENSE-2.0\n\n   Unless required by applicable law or agreed to in writing, software\n   distributed under the License is distributed on an \"AS IS\" BASIS,\n   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n   See the License for the specific language governing permissions and\n   limitations under the License."
  },
  {
    "path": "ToonCrafter/.gitignore",
    "content": ".DS_Store\n*pyc\n.vscode\n__pycache__\n*.egg-info\n\ncheckpoints\nresults\nbackup\nLOG"
  },
  {
    "path": "ToonCrafter/LICENSE",
    "content": "                                 Apache License\n                           Version 2.0, January 2004\n                        http://www.apache.org/licenses/\n\n   TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION\n\n   1. Definitions.\n\n      \"License\" shall mean the terms and conditions for use, reproduction,\n      and distribution as defined by Sections 1 through 9 of this document.\n\n      \"Licensor\" shall mean the copyright owner or entity authorized by\n      the copyright owner that is granting the License.\n\n      \"Legal Entity\" shall mean the union of the acting entity and all\n      other entities that control, are controlled by, or are under common\n      control with that entity. For the purposes of this definition,\n      \"control\" means (i) the power, direct or indirect, to cause the\n      direction or management of such entity, whether by contract or\n      otherwise, or (ii) ownership of fifty percent (50%) or more of the\n      outstanding shares, or (iii) beneficial ownership of such entity.\n\n      \"You\" (or \"Your\") shall mean an individual or Legal Entity\n      exercising permissions granted by this License.\n\n      \"Source\" form shall mean the preferred form for making modifications,\n      including but not limited to software source code, documentation\n      source, and configuration files.\n\n      \"Object\" form shall mean any form resulting from mechanical\n      transformation or translation of a Source form, including but\n      not limited to compiled object code, generated documentation,\n      and conversions to other media types.\n\n      \"Work\" shall mean the work of authorship, whether in Source or\n      Object form, made available under the License, as indicated by a\n      copyright notice that is included in or attached to the work\n      (an example is provided in the Appendix below).\n\n      \"Derivative Works\" shall mean any work, whether in Source or Object\n      form, that is based on (or derived from) the Work and for which the\n      editorial revisions, annotations, elaborations, or other modifications\n      represent, as a whole, an original work of authorship. For the purposes\n      of this License, Derivative Works shall not include works that remain\n      separable from, or merely link (or bind by name) to the interfaces of,\n      the Work and Derivative Works thereof.\n\n      \"Contribution\" shall mean any work of authorship, including\n      the original version of the Work and any modifications or additions\n      to that Work or Derivative Works thereof, that is intentionally\n      submitted to Licensor for inclusion in the Work by the copyright owner\n      or by an individual or Legal Entity authorized to submit on behalf of\n      the copyright owner. 
For the purposes of this definition, \"submitted\"\n      means any form of electronic, verbal, or written communication sent\n      to the Licensor or its representatives, including but not limited to\n      communication on electronic mailing lists, source code control systems,\n      and issue tracking systems that are managed by, or on behalf of, the\n      Licensor for the purpose of discussing and improving the Work, but\n      excluding communication that is conspicuously marked or otherwise\n      designated in writing by the copyright owner as \"Not a Contribution.\"\n\n      \"Contributor\" shall mean Licensor and any individual or Legal Entity\n      on behalf of whom a Contribution has been received by Licensor and\n      subsequently incorporated within the Work.\n\n   2. Grant of Copyright License. Subject to the terms and conditions of\n      this License, each Contributor hereby grants to You a perpetual,\n      worldwide, non-exclusive, no-charge, royalty-free, irrevocable\n      copyright license to reproduce, prepare Derivative Works of,\n      publicly display, publicly perform, sublicense, and distribute the\n      Work and such Derivative Works in Source or Object form.\n\n   3. Grant of Patent License. Subject to the terms and conditions of\n      this License, each Contributor hereby grants to You a perpetual,\n      worldwide, non-exclusive, no-charge, royalty-free, irrevocable\n      (except as stated in this section) patent license to make, have made,\n      use, offer to sell, sell, import, and otherwise transfer the Work,\n      where such license applies only to those patent claims licensable\n      by such Contributor that are necessarily infringed by their\n      Contribution(s) alone or by combination of their Contribution(s)\n      with the Work to which such Contribution(s) was submitted. If You\n      institute patent litigation against any entity (including a\n      cross-claim or counterclaim in a lawsuit) alleging that the Work\n      or a Contribution incorporated within the Work constitutes direct\n      or contributory patent infringement, then any patent licenses\n      granted to You under this License for that Work shall terminate\n      as of the date such litigation is filed.\n\n   4. Redistribution. 
You may reproduce and distribute copies of the\n      Work or Derivative Works thereof in any medium, with or without\n      modifications, and in Source or Object form, provided that You\n      meet the following conditions:\n\n      (a) You must give any other recipients of the Work or\n          Derivative Works a copy of this License; and\n\n      (b) You must cause any modified files to carry prominent notices\n          stating that You changed the files; and\n\n      (c) You must retain, in the Source form of any Derivative Works\n          that You distribute, all copyright, patent, trademark, and\n          attribution notices from the Source form of the Work,\n          excluding those notices that do not pertain to any part of\n          the Derivative Works; and\n\n      (d) If the Work includes a \"NOTICE\" text file as part of its\n          distribution, then any Derivative Works that You distribute must\n          include a readable copy of the attribution notices contained\n          within such NOTICE file, excluding those notices that do not\n          pertain to any part of the Derivative Works, in at least one\n          of the following places: within a NOTICE text file distributed\n          as part of the Derivative Works; within the Source form or\n          documentation, if provided along with the Derivative Works; or,\n          within a display generated by the Derivative Works, if and\n          wherever such third-party notices normally appear. The contents\n          of the NOTICE file are for informational purposes only and\n          do not modify the License. You may add Your own attribution\n          notices within Derivative Works that You distribute, alongside\n          or as an addendum to the NOTICE text from the Work, provided\n          that such additional attribution notices cannot be construed\n          as modifying the License.\n\n      You may add Your own copyright statement to Your modifications and\n      may provide additional or different license terms and conditions\n      for use, reproduction, or distribution of Your modifications, or\n      for any such Derivative Works as a whole, provided Your use,\n      reproduction, and distribution of the Work otherwise complies with\n      the conditions stated in this License.\n\n   5. Submission of Contributions. Unless You explicitly state otherwise,\n      any Contribution intentionally submitted for inclusion in the Work\n      by You to the Licensor shall be under the terms and conditions of\n      this License, without any additional terms or conditions.\n      Notwithstanding the above, nothing herein shall supersede or modify\n      the terms of any separate license agreement you may have executed\n      with Licensor regarding such Contributions.\n\n   6. Trademarks. This License does not grant permission to use the trade\n      names, trademarks, service marks, or product names of the Licensor,\n      except as required for reasonable and customary use in describing the\n      origin of the Work and reproducing the content of the NOTICE file.\n\n   7. Disclaimer of Warranty. 
Unless required by applicable law or\n      agreed to in writing, Licensor provides the Work (and each\n      Contributor provides its Contributions) on an \"AS IS\" BASIS,\n      WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or\n      implied, including, without limitation, any warranties or conditions\n      of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A\n      PARTICULAR PURPOSE. You are solely responsible for determining the\n      appropriateness of using or redistributing the Work and assume any\n      risks associated with Your exercise of permissions under this License.\n\n   8. Limitation of Liability. In no event and under no legal theory,\n      whether in tort (including negligence), contract, or otherwise,\n      unless required by applicable law (such as deliberate and grossly\n      negligent acts) or agreed to in writing, shall any Contributor be\n      liable to You for damages, including any direct, indirect, special,\n      incidental, or consequential damages of any character arising as a\n      result of this License or out of the use or inability to use the\n      Work (including but not limited to damages for loss of goodwill,\n      work stoppage, computer failure or malfunction, or any and all\n      other commercial damages or losses), even if such Contributor\n      has been advised of the possibility of such damages.\n\n   9. Accepting Warranty or Additional Liability. While redistributing\n      the Work or Derivative Works thereof, You may choose to offer,\n      and charge a fee for, acceptance of support, warranty, indemnity,\n      or other liability obligations and/or rights consistent with this\n      License. However, in accepting such obligations, You may act only\n      on Your own behalf and on Your sole responsibility, not on behalf\n      of any other Contributor, and only if You agree to indemnify,\n      defend, and hold each Contributor harmless for any liability\n      incurred by, or claims asserted against, such Contributor by reason\n      of your accepting any such warranty or additional liability.\n\n   END OF TERMS AND CONDITIONS\n\n   APPENDIX: How to apply the Apache License to your work.\n\n      To apply the Apache License to your work, attach the following\n      boilerplate notice, with the fields enclosed by brackets \"[]\"\n      replaced with your own identifying information. (Don't include\n      the brackets!)  The text should be enclosed in the appropriate\n      comment syntax for the file format. We also recommend that a\n      file or class name and description of purpose be included on the\n      same \"printed page\" as the copyright notice for easier\n      identification within third-party archives.\n\n   Copyright Tencent\n\n   Licensed under the Apache License, Version 2.0 (the \"License\");\n   you may not use this file except in compliance with the License.\n   You may obtain a copy of the License at\n\n       http://www.apache.org/licenses/LICENSE-2.0\n\n   Unless required by applicable law or agreed to in writing, software\n   distributed under the License is distributed on an \"AS IS\" BASIS,\n   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n   See the License for the specific language governing permissions and\n   limitations under the License."
  },
  {
    "path": "ToonCrafter/README.md",
    "content": "## ___***ToonCrafter: Generative Cartoon Interpolation***___\n<!-- ![](./assets/logo_long.png#gh-light-mode-only){: width=\"50%\"} -->\n<!-- ![](./assets/logo_long_dark.png#gh-dark-mode-only=100x20) -->\n<div align=\"center\">\n\n\n\n</div>\n \n## 🔆 Introduction\n\n⚠️ Please check our [disclaimer](#disc) first.\n\n🤗 ToonCrafter can interpolate two cartoon images by leveraging the pre-trained image-to-video diffusion priors. Please check our project page and paper for more information. <br>\n\n\n\n\n\n\n\n### 1.1 Showcases (512x320)\n<table class=\"center\">\n    <tr style=\"font-weight: bolder;text-align:center;\">\n        <td>Input starting frame</td>\n        <td>Input ending frame</td>\n        <td>Generated video</td>\n    </tr>\n  <tr>\n  <td>\n    <img src=assets/72109_125.mp4_00-00.png width=\"250\">\n  </td>\n  <td>\n    <img src=assets/72109_125.mp4_00-01.png width=\"250\">\n  </td>\n  <td>\n    <img src=assets/00.gif width=\"250\">\n  </td>\n  </tr>\n\n\n   <tr>\n  <td>\n    <img src=assets/Japan_v2_2_062266_s2_frame1.png width=\"250\">\n  </td>\n  <td>\n    <img src=assets/Japan_v2_2_062266_s2_frame3.png width=\"250\">\n  </td>\n  <td>\n    <img src=assets/03.gif width=\"250\">\n  </td>\n  </tr>\n  <tr>\n  <td>\n    <img src=assets/Japan_v2_1_070321_s3_frame1.png width=\"250\">\n  </td>\n  <td>\n    <img src=assets/Japan_v2_1_070321_s3_frame3.png width=\"250\">\n  </td>\n  <td>\n    <img src=assets/02.gif width=\"250\">\n  </td>\n  </tr> \n  <tr>\n  <td>\n    <img src=assets/74302_1349_frame1.png width=\"250\">\n  </td>\n  <td>\n    <img src=assets/74302_1349_frame3.png width=\"250\">\n  </td>\n  <td>\n    <img src=assets/01.gif width=\"250\">\n  </td>\n  </tr>\n</table>\n\n### 1.2 Sparse sketch guidance\n<table class=\"center\">\n    <tr style=\"font-weight: bolder;text-align:center;\">\n        <td>Input starting frame</td>\n        <td>Input ending frame</td>\n        <td>Input sketch guidance</td>\n        <td>Generated video</td>\n    </tr>\n  <tr>\n  <td>\n    <img src=assets/72105_388.mp4_00-00.png width=\"200\">\n  </td>\n  <td>\n    <img src=assets/72105_388.mp4_00-01.png width=\"200\">\n  </td>\n  <td>\n    <img src=assets/06.gif width=\"200\">\n  </td>\n   <td>\n    <img src=assets/07.gif width=\"200\">\n  </td>\n  </tr>\n\n  <tr>\n  <td>\n    <img src=assets/72110_255.mp4_00-00.png width=\"200\">\n  </td>\n  <td>\n    <img src=assets/72110_255.mp4_00-01.png width=\"200\">\n  </td>\n  <td>\n    <img src=assets/12.gif width=\"200\">\n  </td>\n   <td>\n    <img src=assets/13.gif width=\"200\">\n  </td>\n  </tr>\n\n\n</table>\n\n\n### 2. 
Applications\n#### 2.1 Cartoon Sketch Interpolation (see project page for more details)\n<table class=\"center\">\n    <tr style=\"font-weight: bolder;text-align:center;\">\n        <td>Input starting frame</td>\n        <td>Input ending frame</td>\n        <td>Generated video</td>\n    </tr>\n\n  <tr>\n  <td>\n    <img src=assets/frame0001_10.png width=\"250\">\n  </td>\n  <td>\n    <img src=assets/frame0016_10.png width=\"250\">\n  </td>\n  <td>\n    <img src=assets/10.gif width=\"250\">\n  </td>\n  </tr>\n\n\n   <tr>\n  <td>\n    <img src=assets/frame0001_11.png width=\"250\">\n  </td>\n  <td>\n    <img src=assets/frame0016_11.png width=\"250\">\n  </td>\n  <td>\n    <img src=assets/11.gif width=\"250\">\n  </td>\n  </tr>\n\n</table>\n\n\n#### 2.2 Reference-based Sketch Colorization\n<table class=\"center\">\n    <tr style=\"font-weight: bolder;text-align:center;\">\n        <td>Input sketch</td>\n        <td>Input reference</td>\n        <td>Colorization results</td>\n    </tr>\n    \n  <tr>\n  <td>\n    <img src=assets/04.gif width=\"250\">\n  </td>\n  <td>\n    <img src=assets/frame0001_05.png width=\"250\">\n  </td>\n  <td>\n    <img src=assets/05.gif width=\"250\">\n  </td>\n  </tr>\n\n\n   <tr>\n  <td>\n    <img src=assets/08.gif width=\"250\">\n  </td>\n  <td>\n    <img src=assets/frame0001_09.png width=\"250\">\n  </td>\n  <td>\n    <img src=assets/09.gif width=\"250\">\n  </td>\n  </tr>\n\n</table>\n\n\n\n\n\n\n\n## 📝 Changelog\n- [ ] Add sketch control and colorization function.\n- __[2024.05.29]__: 🔥🔥 Release code and model weights.\n- __[2024.05.28]__: Launch the project page and update the arXiv preprint.\n<br>\n\n\n## 🧰 Models\n\n|Model|Resolution|GPU Mem. & Inference Time (A100, DDIM 50 steps)|Checkpoint|\n|:---------|:---------|:--------|:--------|\n|ToonCrafter_512|320x512| TBD (`perframe_ae=True`)|[Hugging Face](https://huggingface.co/Doubiiu/ToonCrafter/blob/main/model.ckpt)|\n\n\nCurrently, ToonCrafter supports generating videos of up to 16 frames at a resolution of 512x320. The inference time can be reduced by using fewer DDIM steps.\n\n\n\n## ⚙️ Setup\n\n### Install Environment via Anaconda (Recommended)\n```bash\nconda create -n tooncrafter python=3.8.5\nconda activate tooncrafter\npip install -r requirements.txt\n```\n\n\n## 💫 Inference\n### 1. Command line\n\nDownload the pretrained ToonCrafter_512 model and place `model.ckpt` at `checkpoints/tooncrafter_512_interp_v1/model.ckpt`.\n```bash\n  sh scripts/run.sh\n```\n\n\n### 2. Local Gradio demo\n\nDownload the pretrained model and place it in the directory described above.\n```bash\n  python gradio_app.py \n```\n\n\n\n\n\n\n<!-- ## 🤝 Community Support -->\n\n\n\n<a name=\"disc\"></a>\n## 📢 Disclaimer\nCalm down. Our framework opens up the era of generative cartoon interpolation, but due to the variability of the generative video prior, the success rate is not guaranteed.\n\n⚠️ This is an open-source research exploration, not a commercial product, and it may not meet all your expectations.\n\nThis project strives to impact the domain of AI-driven video generation positively. Users are granted the freedom to create videos using this tool, but they are expected to comply with local laws and utilize it responsibly. The developers do not assume any responsibility for potential misuse by users.\n****"
  },
  {
    "path": "ToonCrafter/__init__.py",
    "content": "import sys\nfrom pathlib import Path\nsys.path.append(Path(__file__).parent.as_posix())\n"
  },
  {
    "path": "ToonCrafter/cldm/cldm.py",
    "content": "import einops\nimport torch\nimport torch as th\nimport torch.nn as nn\n\nfrom ToonCrafter.ldm.modules.diffusionmodules.util import (\n    conv_nd,\n    linear,\n    zero_module,\n    timestep_embedding,\n)\n\nfrom einops import rearrange, repeat\nfrom torchvision.utils import make_grid\nfrom ToonCrafter.ldm.modules.attention import SpatialTransformer\nfrom ToonCrafter.ldm.modules.diffusionmodules.openaimodel import TimestepEmbedSequential, ResBlock, Downsample, AttentionBlock\nfrom lvdm.modules.networks.openaimodel3d import UNetModel\nfrom ToonCrafter.ldm.models.diffusion.ddpm import LatentDiffusion\nfrom ToonCrafter.ldm.util import log_txt_as_img, exists, instantiate_from_config\nfrom ToonCrafter.ldm.models.diffusion.ddim import DDIMSampler\n\n\nclass ControlledUnetModel(UNetModel):\n    def forward(self, x, timesteps, context=None, features_adapter=None, fs=None, control = None, **kwargs):\n        b,_,t,_,_ = x.shape\n        t_emb = timestep_embedding(timesteps, self.model_channels, repeat_only=False).type(x.dtype)\n        emb = self.time_embed(t_emb)\n        ## repeat t times for context [(b t) 77 768] & time embedding\n        ## check if we use per-frame image conditioning\n        _, l_context, _ = context.shape\n        if l_context == 77 + t*16: ## !!! HARD CODE here\n            context_text, context_img = context[:,:77,:], context[:,77:,:]\n            context_text = context_text.repeat_interleave(repeats=t, dim=0)\n            context_img = rearrange(context_img, 'b (t l) c -> (b t) l c', t=t)\n            context = torch.cat([context_text, context_img], dim=1)\n        else:\n            context = context.repeat_interleave(repeats=t, dim=0)\n        emb = emb.repeat_interleave(repeats=t, dim=0)\n        \n        ## always in shape (b t) c h w, except for temporal layer\n        x = rearrange(x, 'b c t h w -> (b t) c h w')\n\n        ## combine emb\n        if self.fs_condition:\n            if fs is None:\n                fs = torch.tensor(\n                    [self.default_fs] * b, dtype=torch.long, device=x.device)\n            fs_emb = timestep_embedding(fs, self.model_channels, repeat_only=False).type(x.dtype)\n\n            fs_embed = self.fps_embedding(fs_emb)\n            fs_embed = fs_embed.repeat_interleave(repeats=t, dim=0)\n            emb = emb + fs_embed\n\n        h = x.type(self.dtype)\n        adapter_idx = 0\n        hs = []\n        with torch.no_grad():\n            for id, module in enumerate(self.input_blocks):\n                h = module(h, emb, context=context, batch_size=b)\n                if id ==0 and self.addition_attention:\n                    h = self.init_attn(h, emb, context=context, batch_size=b)\n                ## plug-in adapter features\n                if ((id+1)%3 == 0) and features_adapter is not None:\n                    h = h + features_adapter[adapter_idx]\n                    adapter_idx += 1\n                hs.append(h)\n            if features_adapter is not None:\n                assert len(features_adapter)==adapter_idx, 'Wrong features_adapter'\n\n            h = self.middle_block(h, emb, context=context, batch_size=b)\n        \n        if control is not None:\n            h += control.pop()\n        \n        for module in self.output_blocks:\n            if control is None:\n                h = torch.cat([h, hs.pop()], dim=1)\n            else:\n                h = torch.cat([h, hs.pop() + control.pop()], dim=1)\n            h = module(h, emb, context=context, batch_size=b)\n\n        h = 
h.type(x.dtype)\n        y = self.out(h)\n        \n        # reshape back to (b c t h w)\n        y = rearrange(y, '(b t) c h w -> b c t h w', b=b)\n        return y\n\n\nclass ControlNet(nn.Module):\n    def __init__(\n            self,\n            image_size,\n            in_channels,\n            model_channels,\n            hint_channels,\n            num_res_blocks,\n            attention_resolutions,\n            dropout=0,\n            channel_mult=(1, 2, 4, 8),\n            conv_resample=True,\n            dims=2,\n            use_checkpoint=False,\n            use_fp16=False,\n            num_heads=-1,\n            num_head_channels=-1,\n            num_heads_upsample=-1,\n            use_scale_shift_norm=False,\n            resblock_updown=False,\n            use_new_attention_order=False,\n            use_spatial_transformer=False,  # custom transformer support\n            transformer_depth=1,  # custom transformer support\n            context_dim=None,  # custom transformer support\n            n_embed=None,  # custom support for prediction of discrete ids into codebook of first stage vq model\n            legacy=True,\n            disable_self_attentions=None,\n            num_attention_blocks=None,\n            disable_middle_self_attn=False,\n            use_linear_in_transformer=False,\n    ):\n        super().__init__()\n        if use_spatial_transformer:\n            assert context_dim is not None, 'Fool!! You forgot to include the dimension of your cross-attention conditioning...'\n\n        if context_dim is not None:\n            assert use_spatial_transformer, 'Fool!! You forgot to use the spatial transformer for your cross-attention conditioning...'\n            from omegaconf.listconfig import ListConfig\n            if type(context_dim) == ListConfig:\n                context_dim = list(context_dim)\n\n        if num_heads_upsample == -1:\n            num_heads_upsample = num_heads\n\n        if num_heads == -1:\n            assert num_head_channels != -1, 'Either num_heads or num_head_channels has to be set'\n\n        if num_head_channels == -1:\n            assert num_heads != -1, 'Either num_heads or num_head_channels has to be set'\n\n        self.dims = dims\n        self.image_size = image_size\n        self.in_channels = in_channels\n        self.model_channels = model_channels\n        if isinstance(num_res_blocks, int):\n            self.num_res_blocks = len(channel_mult) * [num_res_blocks]\n        else:\n            if len(num_res_blocks) != len(channel_mult):\n                raise ValueError(\"provide num_res_blocks either as an int (globally constant) or \"\n                                 \"as a list/tuple (per-level) with the same length as channel_mult\")\n            self.num_res_blocks = num_res_blocks\n        if disable_self_attentions is not None:\n            # should be a list of booleans, indicating whether to disable self-attention in TransformerBlocks or not\n            assert len(disable_self_attentions) == len(channel_mult)\n        if num_attention_blocks is not None:\n            assert len(num_attention_blocks) == len(self.num_res_blocks)\n            assert all(map(lambda i: self.num_res_blocks[i] >= num_attention_blocks[i], range(len(num_attention_blocks))))\n            print(f\"Constructor of UNetModel received num_attention_blocks={num_attention_blocks}. 
\"\n                  f\"This option has LESS priority than attention_resolutions {attention_resolutions}, \"\n                  f\"i.e., in cases where num_attention_blocks[i] > 0 but 2**i not in attention_resolutions, \"\n                  f\"attention will still not be set.\")\n\n        self.attention_resolutions = attention_resolutions\n        self.dropout = dropout\n        self.channel_mult = channel_mult\n        self.conv_resample = conv_resample\n        self.use_checkpoint = use_checkpoint\n        self.dtype = th.float16 if use_fp16 else th.float32\n        self.num_heads = num_heads\n        self.num_head_channels = num_head_channels\n        self.num_heads_upsample = num_heads_upsample\n        self.predict_codebook_ids = n_embed is not None\n\n        time_embed_dim = model_channels * 4\n        self.time_embed = nn.Sequential(\n            linear(model_channels, time_embed_dim),\n            nn.SiLU(),\n            linear(time_embed_dim, time_embed_dim),\n        )\n\n        self.input_blocks = nn.ModuleList(\n            [\n                TimestepEmbedSequential(\n                    conv_nd(dims, in_channels, model_channels, 3, padding=1)\n                )\n            ]\n        )\n        self.zero_convs = nn.ModuleList([self.make_zero_conv(model_channels)])\n\n        self.input_hint_block = TimestepEmbedSequential(\n            conv_nd(dims, hint_channels, 16, 3, padding=1),\n            nn.SiLU(),\n            conv_nd(dims, 16, 16, 3, padding=1),\n            nn.SiLU(),\n            conv_nd(dims, 16, 32, 3, padding=1, stride=2),\n            nn.SiLU(),\n            conv_nd(dims, 32, 32, 3, padding=1),\n            nn.SiLU(),\n            conv_nd(dims, 32, 96, 3, padding=1, stride=2),\n            nn.SiLU(),\n            conv_nd(dims, 96, 96, 3, padding=1),\n            nn.SiLU(),\n            conv_nd(dims, 96, 256, 3, padding=1, stride=2),\n            nn.SiLU(),\n            zero_module(conv_nd(dims, 256, model_channels, 3, padding=1))\n        )\n\n        self._feature_size = model_channels\n        input_block_chans = [model_channels]\n        ch = model_channels\n        ds = 1\n        for level, mult in enumerate(channel_mult):\n            for nr in range(self.num_res_blocks[level]):\n                layers = [\n                    ResBlock(\n                        ch,\n                        time_embed_dim,\n                        dropout,\n                        out_channels=mult * model_channels,\n                        dims=dims,\n                        use_checkpoint=use_checkpoint,\n                        use_scale_shift_norm=use_scale_shift_norm,\n                    )\n                ]\n                ch = mult * model_channels\n                if ds in attention_resolutions:\n                    if num_head_channels == -1:\n                        dim_head = ch // num_heads\n                    else:\n                        num_heads = ch // num_head_channels\n                        dim_head = num_head_channels\n                    if legacy:\n                        # num_heads = 1\n                        dim_head = ch // num_heads if use_spatial_transformer else num_head_channels\n                    if exists(disable_self_attentions):\n                        disabled_sa = disable_self_attentions[level]\n                    else:\n                        disabled_sa = False\n\n                    if not exists(num_attention_blocks) or nr < num_attention_blocks[level]:\n                        layers.append(\n                        
    AttentionBlock(\n                                ch,\n                                use_checkpoint=use_checkpoint,\n                                num_heads=num_heads,\n                                num_head_channels=dim_head,\n                                use_new_attention_order=use_new_attention_order,\n                            ) if not use_spatial_transformer else SpatialTransformer(\n                                ch, num_heads, dim_head, depth=transformer_depth, context_dim=context_dim,\n                                disable_self_attn=disabled_sa, use_linear=use_linear_in_transformer,\n                                use_checkpoint=use_checkpoint\n                            )\n                        )\n                self.input_blocks.append(TimestepEmbedSequential(*layers))\n                self.zero_convs.append(self.make_zero_conv(ch))\n                self._feature_size += ch\n                input_block_chans.append(ch)\n            if level != len(channel_mult) - 1:\n                out_ch = ch\n                self.input_blocks.append(\n                    TimestepEmbedSequential(\n                        ResBlock(\n                            ch,\n                            time_embed_dim,\n                            dropout,\n                            out_channels=out_ch,\n                            dims=dims,\n                            use_checkpoint=use_checkpoint,\n                            use_scale_shift_norm=use_scale_shift_norm,\n                            down=True,\n                        )\n                        if resblock_updown\n                        else Downsample(\n                            ch, conv_resample, dims=dims, out_channels=out_ch\n                        )\n                    )\n                )\n                ch = out_ch\n                input_block_chans.append(ch)\n                self.zero_convs.append(self.make_zero_conv(ch))\n                ds *= 2\n                self._feature_size += ch\n\n        if num_head_channels == -1:\n            dim_head = ch // num_heads\n        else:\n            num_heads = ch // num_head_channels\n            dim_head = num_head_channels\n        if legacy:\n            # num_heads = 1\n            dim_head = ch // num_heads if use_spatial_transformer else num_head_channels\n        self.middle_block = TimestepEmbedSequential(\n            ResBlock(\n                ch,\n                time_embed_dim,\n                dropout,\n                dims=dims,\n                use_checkpoint=use_checkpoint,\n                use_scale_shift_norm=use_scale_shift_norm,\n            ),\n            AttentionBlock(\n                ch,\n                use_checkpoint=use_checkpoint,\n                num_heads=num_heads,\n                num_head_channels=dim_head,\n                use_new_attention_order=use_new_attention_order,\n            ) if not use_spatial_transformer else SpatialTransformer(  # always uses a self-attn\n                ch, num_heads, dim_head, depth=transformer_depth, context_dim=context_dim,\n                disable_self_attn=disable_middle_self_attn, use_linear=use_linear_in_transformer,\n                use_checkpoint=use_checkpoint\n            ),\n            ResBlock(\n                ch,\n                time_embed_dim,\n                dropout,\n                dims=dims,\n                use_checkpoint=use_checkpoint,\n                use_scale_shift_norm=use_scale_shift_norm,\n            ),\n        )\n        self.middle_block_out = 
self.make_zero_conv(ch)\n        self._feature_size += ch\n\n    def make_zero_conv(self, channels):\n        return TimestepEmbedSequential(zero_module(conv_nd(self.dims, channels, channels, 1, padding=0)))\n\n    def forward(self, x, hint, timesteps, context, **kwargs):\n        t_emb = timestep_embedding(timesteps, self.model_channels, repeat_only=False)\n        emb = self.time_embed(t_emb)\n\n        guided_hint = self.input_hint_block(hint, emb, context)\n\n        outs = []\n\n        h = x.type(self.dtype)\n     \n        for module, zero_conv in zip(self.input_blocks, self.zero_convs):\n            if guided_hint is not None:\n                h = module(h, emb, context)\n                h += guided_hint\n                guided_hint = None\n            else:\n                h = module(h, emb, context)\n            outs.append(zero_conv(h, emb, context, True))\n\n        h = self.middle_block(h, emb, context)\n        outs.append(self.middle_block_out(h, emb, context))\n\n        return outs\n\n\nclass ControlLDM(LatentDiffusion):\n\n    def __init__(self, control_stage_config, control_key, only_mid_control, *args, **kwargs):\n        super().__init__(*args, **kwargs)\n        self.control_model = instantiate_from_config(control_stage_config)\n        self.control_key = control_key\n        self.only_mid_control = only_mid_control\n        self.control_scales = [1.0] * 13\n\n    @torch.no_grad()\n    def get_input(self, batch, k, bs=None, *args, **kwargs):\n        x, c = super().get_input(batch, self.first_stage_key, *args, **kwargs)\n        control = batch[self.control_key]\n        if bs is not None:\n            control = control[:bs]\n        control = control.to(self.device)\n        control = einops.rearrange(control, 'b h w c -> b c h w')\n        control = control.to(memory_format=torch.contiguous_format).float()\n        return x, dict(c_crossattn=[c], c_concat=[control])\n\n    def apply_model(self, x_noisy, t, cond, *args, **kwargs):\n        assert isinstance(cond, dict)\n        diffusion_model = self.model.diffusion_model\n\n        cond_txt = torch.cat(cond['c_crossattn'], 1)\n\n        if cond['c_concat'] is None:\n            eps = diffusion_model(x=x_noisy, timesteps=t, context=cond_txt, control=None, only_mid_control=self.only_mid_control)\n        else:\n            control = self.control_model(x=x_noisy, hint=torch.cat(cond['c_concat'], 1), timesteps=t, context=cond_txt)\n            control = [c * scale for c, scale in zip(control, self.control_scales)]\n            eps = diffusion_model(x=x_noisy, timesteps=t, context=cond_txt, control=control, only_mid_control=self.only_mid_control)\n\n        return eps\n\n    @torch.no_grad()\n    def get_unconditional_conditioning(self, N):\n        return self.get_learned_conditioning([\"\"] * N)\n\n    @torch.no_grad()\n    def log_images(self, batch, N=4, n_row=2, sample=False, ddim_steps=50, ddim_eta=0.0, return_keys=None,\n                   quantize_denoised=True, inpaint=True, plot_denoise_rows=False, plot_progressive_rows=True,\n                   plot_diffusion_rows=False, unconditional_guidance_scale=9.0, unconditional_guidance_label=None,\n                   use_ema_scope=True,\n                   **kwargs):\n        use_ddim = ddim_steps is not None\n\n        log = dict()\n        z, c = self.get_input(batch, self.first_stage_key, bs=N)\n        c_cat, c = c[\"c_concat\"][0][:N], c[\"c_crossattn\"][0][:N]\n        N = min(z.shape[0], N)\n        n_row = min(z.shape[0], n_row)\n        
log[\"reconstruction\"] = self.decode_first_stage(z)\n        log[\"control\"] = c_cat * 2.0 - 1.0\n        log[\"conditioning\"] = log_txt_as_img((512, 512), batch[self.cond_stage_key], size=16)\n\n        if plot_diffusion_rows:\n            # get diffusion row\n            diffusion_row = list()\n            z_start = z[:n_row]\n            for t in range(self.num_timesteps):\n                if t % self.log_every_t == 0 or t == self.num_timesteps - 1:\n                    t = repeat(torch.tensor([t]), '1 -> b', b=n_row)\n                    t = t.to(self.device).long()\n                    noise = torch.randn_like(z_start)\n                    z_noisy = self.q_sample(x_start=z_start, t=t, noise=noise)\n                    diffusion_row.append(self.decode_first_stage(z_noisy))\n\n            diffusion_row = torch.stack(diffusion_row)  # n_log_step, n_row, C, H, W\n            diffusion_grid = rearrange(diffusion_row, 'n b c h w -> b n c h w')\n            diffusion_grid = rearrange(diffusion_grid, 'b n c h w -> (b n) c h w')\n            diffusion_grid = make_grid(diffusion_grid, nrow=diffusion_row.shape[0])\n            log[\"diffusion_row\"] = diffusion_grid\n\n        if sample:\n            # get denoise row\n            samples, z_denoise_row = self.sample_log(cond={\"c_concat\": [c_cat], \"c_crossattn\": [c]},\n                                                     batch_size=N, ddim=use_ddim,\n                                                     ddim_steps=ddim_steps, eta=ddim_eta)\n            x_samples = self.decode_first_stage(samples)\n            log[\"samples\"] = x_samples\n            if plot_denoise_rows:\n                denoise_grid = self._get_denoise_row_from_list(z_denoise_row)\n                log[\"denoise_row\"] = denoise_grid\n\n        if unconditional_guidance_scale > 1.0:\n            uc_cross = self.get_unconditional_conditioning(N)\n            uc_cat = c_cat  # torch.zeros_like(c_cat)\n            uc_full = {\"c_concat\": [uc_cat], \"c_crossattn\": [uc_cross]}\n            samples_cfg, _ = self.sample_log(cond={\"c_concat\": [c_cat], \"c_crossattn\": [c]},\n                                             batch_size=N, ddim=use_ddim,\n                                             ddim_steps=ddim_steps, eta=ddim_eta,\n                                             unconditional_guidance_scale=unconditional_guidance_scale,\n                                             unconditional_conditioning=uc_full,\n                                             )\n            x_samples_cfg = self.decode_first_stage(samples_cfg)\n            log[f\"samples_cfg_scale_{unconditional_guidance_scale:.2f}\"] = x_samples_cfg\n\n        return log\n\n    @torch.no_grad()\n    def sample_log(self, cond, batch_size, ddim, ddim_steps, **kwargs):\n        ddim_sampler = DDIMSampler(self)\n        b, c, h, w = cond[\"c_concat\"][0].shape\n        shape = (self.channels, h // 8, w // 8)\n        samples, intermediates = ddim_sampler.sample(ddim_steps, batch_size, shape, cond, verbose=False, **kwargs)\n        return samples, intermediates\n\n    def configure_optimizers(self):\n        lr = self.learning_rate\n        params = list(self.control_model.parameters())\n        if not self.sd_locked:\n            params += list(self.model.diffusion_model.output_blocks.parameters())\n            params += list(self.model.diffusion_model.out.parameters())\n        opt = torch.optim.AdamW(params, lr=lr)\n        return opt\n\n    def low_vram_shift(self, is_diffusing):\n        if is_diffusing:\n      
      self.model = self.model.cuda()\n            self.control_model = self.control_model.cuda()\n            self.first_stage_model = self.first_stage_model.cpu()\n            self.cond_stage_model = self.cond_stage_model.cpu()\n        else:\n            self.model = self.model.cpu()\n            self.control_model = self.control_model.cpu()\n            self.first_stage_model = self.first_stage_model.cuda()\n            self.cond_stage_model = self.cond_stage_model.cuda()\n"
  },
  {
    "path": "ToonCrafter/cldm/ddim_hacked.py",
    "content": "\"\"\"SAMPLING ONLY.\"\"\"\n\nimport torch\nimport numpy as np\nfrom tqdm import tqdm\n\nfrom ToonCrafter.ldm.modules.diffusionmodules.util import make_ddim_sampling_parameters, make_ddim_timesteps, noise_like, extract_into_tensor\n\n\nclass DDIMSampler(object):\n    def __init__(self, model, schedule=\"linear\", **kwargs):\n        super().__init__()\n        self.model = model\n        self.ddpm_num_timesteps = model.num_timesteps\n        self.schedule = schedule\n\n    def register_buffer(self, name, attr):\n        if type(attr) == torch.Tensor:\n            if attr.device != torch.device(\"cuda\"):\n                attr = attr.to(torch.device(\"cuda\"))\n        setattr(self, name, attr)\n\n    def make_schedule(self, ddim_num_steps, ddim_discretize=\"uniform\", ddim_eta=0., verbose=True):\n        self.ddim_timesteps = make_ddim_timesteps(ddim_discr_method=ddim_discretize, num_ddim_timesteps=ddim_num_steps,\n                                                  num_ddpm_timesteps=self.ddpm_num_timesteps,verbose=verbose)\n        alphas_cumprod = self.model.alphas_cumprod\n        assert alphas_cumprod.shape[0] == self.ddpm_num_timesteps, 'alphas have to be defined for each timestep'\n        to_torch = lambda x: x.clone().detach().to(torch.float32).to(self.model.device)\n\n        self.register_buffer('betas', to_torch(self.model.betas))\n        self.register_buffer('alphas_cumprod', to_torch(alphas_cumprod))\n        self.register_buffer('alphas_cumprod_prev', to_torch(self.model.alphas_cumprod_prev))\n\n        # calculations for diffusion q(x_t | x_{t-1}) and others\n        self.register_buffer('sqrt_alphas_cumprod', to_torch(np.sqrt(alphas_cumprod.cpu())))\n        self.register_buffer('sqrt_one_minus_alphas_cumprod', to_torch(np.sqrt(1. - alphas_cumprod.cpu())))\n        self.register_buffer('log_one_minus_alphas_cumprod', to_torch(np.log(1. - alphas_cumprod.cpu())))\n        self.register_buffer('sqrt_recip_alphas_cumprod', to_torch(np.sqrt(1. / alphas_cumprod.cpu())))\n        self.register_buffer('sqrt_recipm1_alphas_cumprod', to_torch(np.sqrt(1. / alphas_cumprod.cpu() - 1)))\n\n        # ddim sampling parameters\n        ddim_sigmas, ddim_alphas, ddim_alphas_prev = make_ddim_sampling_parameters(alphacums=alphas_cumprod.cpu(),\n                                                                                   ddim_timesteps=self.ddim_timesteps,\n                                                                                   eta=ddim_eta,verbose=verbose)\n        self.register_buffer('ddim_sigmas', ddim_sigmas)\n        self.register_buffer('ddim_alphas', ddim_alphas)\n        self.register_buffer('ddim_alphas_prev', ddim_alphas_prev)\n        self.register_buffer('ddim_sqrt_one_minus_alphas', np.sqrt(1. 
- ddim_alphas))\n        sigmas_for_original_sampling_steps = ddim_eta * torch.sqrt(\n            (1 - self.alphas_cumprod_prev) / (1 - self.alphas_cumprod) * (\n                        1 - self.alphas_cumprod / self.alphas_cumprod_prev))\n        self.register_buffer('ddim_sigmas_for_original_num_steps', sigmas_for_original_sampling_steps)\n\n    @torch.no_grad()\n    def sample(self,\n               S,\n               batch_size,\n               shape,\n               conditioning=None,\n               callback=None,\n               normals_sequence=None,\n               img_callback=None,\n               quantize_x0=False,\n               eta=0.,\n               mask=None,\n               x0=None,\n               temperature=1.,\n               noise_dropout=0.,\n               score_corrector=None,\n               corrector_kwargs=None,\n               verbose=True,\n               x_T=None,\n               log_every_t=100,\n               unconditional_guidance_scale=1.,\n               unconditional_conditioning=None, # this has to come in the same format as the conditioning, # e.g. as encoded tokens, ...\n               dynamic_threshold=None,\n               ucg_schedule=None,\n               **kwargs\n               ):\n        if conditioning is not None:\n            if isinstance(conditioning, dict):\n                ctmp = conditioning[list(conditioning.keys())[0]]\n                while isinstance(ctmp, list): ctmp = ctmp[0]\n                cbs = ctmp.shape[0]\n                if cbs != batch_size:\n                    print(f\"Warning: Got {cbs} conditionings but batch-size is {batch_size}\")\n\n            elif isinstance(conditioning, list):\n                for ctmp in conditioning:\n                    if ctmp.shape[0] != batch_size:\n                        print(f\"Warning: Got {ctmp.shape[0]} conditionings but batch-size is {batch_size}\")\n\n            else:\n                if conditioning.shape[0] != batch_size:\n                    print(f\"Warning: Got {conditioning.shape[0]} conditionings but batch-size is {batch_size}\")\n\n        self.make_schedule(ddim_num_steps=S, ddim_eta=eta, verbose=verbose)\n        # sampling\n        C, H, W = shape\n        size = (batch_size, C, H, W)\n        print(f'Data shape for DDIM sampling is {size}, eta {eta}')\n\n        samples, intermediates = self.ddim_sampling(conditioning, size,\n                                                    callback=callback,\n                                                    img_callback=img_callback,\n                                                    quantize_denoised=quantize_x0,\n                                                    mask=mask, x0=x0,\n                                                    ddim_use_original_steps=False,\n                                                    noise_dropout=noise_dropout,\n                                                    temperature=temperature,\n                                                    score_corrector=score_corrector,\n                                                    corrector_kwargs=corrector_kwargs,\n                                                    x_T=x_T,\n                                                    log_every_t=log_every_t,\n                                                    unconditional_guidance_scale=unconditional_guidance_scale,\n                                                    unconditional_conditioning=unconditional_conditioning,\n                                                    
dynamic_threshold=dynamic_threshold,\n                                                    ucg_schedule=ucg_schedule\n                                                    )\n        return samples, intermediates\n\n    @torch.no_grad()\n    def ddim_sampling(self, cond, shape,\n                      x_T=None, ddim_use_original_steps=False,\n                      callback=None, timesteps=None, quantize_denoised=False,\n                      mask=None, x0=None, img_callback=None, log_every_t=100,\n                      temperature=1., noise_dropout=0., score_corrector=None, corrector_kwargs=None,\n                      unconditional_guidance_scale=1., unconditional_conditioning=None, dynamic_threshold=None,\n                      ucg_schedule=None):\n        device = self.model.betas.device\n        b = shape[0]\n        if x_T is None:\n            img = torch.randn(shape, device=device)\n        else:\n            img = x_T\n\n        if timesteps is None:\n            timesteps = self.ddpm_num_timesteps if ddim_use_original_steps else self.ddim_timesteps\n        elif timesteps is not None and not ddim_use_original_steps:\n            subset_end = int(min(timesteps / self.ddim_timesteps.shape[0], 1) * self.ddim_timesteps.shape[0]) - 1\n            timesteps = self.ddim_timesteps[:subset_end]\n\n        intermediates = {'x_inter': [img], 'pred_x0': [img]}\n        time_range = reversed(range(0,timesteps)) if ddim_use_original_steps else np.flip(timesteps)\n        total_steps = timesteps if ddim_use_original_steps else timesteps.shape[0]\n        print(f\"Running DDIM Sampling with {total_steps} timesteps\")\n\n        iterator = tqdm(time_range, desc='DDIM Sampler', total=total_steps)\n\n        for i, step in enumerate(iterator):\n            index = total_steps - i - 1\n            ts = torch.full((b,), step, device=device, dtype=torch.long)\n\n            if mask is not None:\n                assert x0 is not None\n                img_orig = self.model.q_sample(x0, ts)  # TODO: deterministic forward pass?\n                img = img_orig * mask + (1. 
- mask) * img\n\n            if ucg_schedule is not None:\n                assert len(ucg_schedule) == len(time_range)\n                unconditional_guidance_scale = ucg_schedule[i]\n\n            outs = self.p_sample_ddim(img, cond, ts, index=index, use_original_steps=ddim_use_original_steps,\n                                      quantize_denoised=quantize_denoised, temperature=temperature,\n                                      noise_dropout=noise_dropout, score_corrector=score_corrector,\n                                      corrector_kwargs=corrector_kwargs,\n                                      unconditional_guidance_scale=unconditional_guidance_scale,\n                                      unconditional_conditioning=unconditional_conditioning,\n                                      dynamic_threshold=dynamic_threshold)\n            img, pred_x0 = outs\n            if callback: callback(i)\n            if img_callback: img_callback(pred_x0, i)\n\n            if index % log_every_t == 0 or index == total_steps - 1:\n                intermediates['x_inter'].append(img)\n                intermediates['pred_x0'].append(pred_x0)\n\n        return img, intermediates\n\n    @torch.no_grad()\n    def p_sample_ddim(self, x, c, t, index, repeat_noise=False, use_original_steps=False, quantize_denoised=False,\n                      temperature=1., noise_dropout=0., score_corrector=None, corrector_kwargs=None,\n                      unconditional_guidance_scale=1., unconditional_conditioning=None,\n                      dynamic_threshold=None):\n        b, *_, device = *x.shape, x.device\n\n        if unconditional_conditioning is None or unconditional_guidance_scale == 1.:\n            model_output = self.model.apply_model(x, t, c)\n        else:\n            model_t = self.model.apply_model(x, t, c)\n            model_uncond = self.model.apply_model(x, t, unconditional_conditioning)\n            model_output = model_uncond + unconditional_guidance_scale * (model_t - model_uncond)\n\n        if self.model.parameterization == \"v\":\n            e_t = self.model.predict_eps_from_z_and_v(x, t, model_output)\n        else:\n            e_t = model_output\n\n        if score_corrector is not None:\n            assert self.model.parameterization == \"eps\", 'not implemented'\n            e_t = score_corrector.modify_score(self.model, e_t, x, t, c, **corrector_kwargs)\n\n        alphas = self.model.alphas_cumprod if use_original_steps else self.ddim_alphas\n        alphas_prev = self.model.alphas_cumprod_prev if use_original_steps else self.ddim_alphas_prev\n        sqrt_one_minus_alphas = self.model.sqrt_one_minus_alphas_cumprod if use_original_steps else self.ddim_sqrt_one_minus_alphas\n        sigmas = self.model.ddim_sigmas_for_original_num_steps if use_original_steps else self.ddim_sigmas\n        # select parameters corresponding to the currently considered timestep\n        a_t = torch.full((b, 1, 1, 1), alphas[index], device=device)\n        a_prev = torch.full((b, 1, 1, 1), alphas_prev[index], device=device)\n        sigma_t = torch.full((b, 1, 1, 1), sigmas[index], device=device)\n        sqrt_one_minus_at = torch.full((b, 1, 1, 1), sqrt_one_minus_alphas[index],device=device)\n\n        # current prediction for x_0\n        if self.model.parameterization != \"v\":\n            pred_x0 = (x - sqrt_one_minus_at * e_t) / a_t.sqrt()\n        else:\n            pred_x0 = self.model.predict_start_from_z_and_v(x, t, model_output)\n\n        if quantize_denoised:\n            pred_x0, _, *_ = 
self.model.first_stage_model.quantize(pred_x0)\n\n        if dynamic_threshold is not None:\n            raise NotImplementedError()\n\n        # direction pointing to x_t\n        dir_xt = (1. - a_prev - sigma_t**2).sqrt() * e_t\n        noise = sigma_t * noise_like(x.shape, device, repeat_noise) * temperature\n        if noise_dropout > 0.:\n            noise = torch.nn.functional.dropout(noise, p=noise_dropout)\n        x_prev = a_prev.sqrt() * pred_x0 + dir_xt + noise\n        return x_prev, pred_x0\n\n    @torch.no_grad()\n    def encode(self, x0, c, t_enc, use_original_steps=False, return_intermediates=None,\n               unconditional_guidance_scale=1.0, unconditional_conditioning=None, callback=None):\n        timesteps = np.arange(self.ddpm_num_timesteps) if use_original_steps else self.ddim_timesteps\n        num_reference_steps = timesteps.shape[0]\n\n        assert t_enc <= num_reference_steps\n        num_steps = t_enc\n\n        if use_original_steps:\n            alphas_next = self.alphas_cumprod[:num_steps]\n            alphas = self.alphas_cumprod_prev[:num_steps]\n        else:\n            alphas_next = self.ddim_alphas[:num_steps]\n            alphas = torch.tensor(self.ddim_alphas_prev[:num_steps])\n\n        x_next = x0\n        intermediates = []\n        inter_steps = []\n        for i in tqdm(range(num_steps), desc='Encoding Image'):\n            t = torch.full((x0.shape[0],), timesteps[i], device=self.model.device, dtype=torch.long)\n            if unconditional_guidance_scale == 1.:\n                noise_pred = self.model.apply_model(x_next, t, c)\n            else:\n                assert unconditional_conditioning is not None\n                e_t_uncond, noise_pred = torch.chunk(\n                    self.model.apply_model(torch.cat((x_next, x_next)), torch.cat((t, t)),\n                                           torch.cat((unconditional_conditioning, c))), 2)\n                noise_pred = e_t_uncond + unconditional_guidance_scale * (noise_pred - e_t_uncond)\n\n            xt_weighted = (alphas_next[i] / alphas[i]).sqrt() * x_next\n            weighted_noise_pred = alphas_next[i].sqrt() * (\n                    (1 / alphas_next[i] - 1).sqrt() - (1 / alphas[i] - 1).sqrt()) * noise_pred\n            x_next = xt_weighted + weighted_noise_pred\n            if return_intermediates and i % (\n                    num_steps // return_intermediates) == 0 and i < num_steps - 1:\n                intermediates.append(x_next)\n                inter_steps.append(i)\n            elif return_intermediates and i >= num_steps - 2:\n                intermediates.append(x_next)\n                inter_steps.append(i)\n            if callback: callback(i)\n\n        out = {'x_encoded': x_next, 'intermediate_steps': inter_steps}\n        if return_intermediates:\n            out.update({'intermediates': intermediates})\n        return x_next, out\n\n    @torch.no_grad()\n    def stochastic_encode(self, x0, t, use_original_steps=False, noise=None):\n        # fast, but does not allow for exact reconstruction\n        # t serves as an index to gather the correct alphas\n        if use_original_steps:\n            sqrt_alphas_cumprod = self.sqrt_alphas_cumprod\n            sqrt_one_minus_alphas_cumprod = self.sqrt_one_minus_alphas_cumprod\n        else:\n            sqrt_alphas_cumprod = torch.sqrt(self.ddim_alphas)\n            sqrt_one_minus_alphas_cumprod = self.ddim_sqrt_one_minus_alphas\n\n        if noise is None:\n            noise = torch.randn_like(x0)\n        return 
(extract_into_tensor(sqrt_alphas_cumprod, t, x0.shape) * x0 +\n                extract_into_tensor(sqrt_one_minus_alphas_cumprod, t, x0.shape) * noise)\n\n    @torch.no_grad()\n    def decode(self, x_latent, cond, t_start, unconditional_guidance_scale=1.0, unconditional_conditioning=None,\n               use_original_steps=False, callback=None):\n\n        timesteps = np.arange(self.ddpm_num_timesteps) if use_original_steps else self.ddim_timesteps\n        timesteps = timesteps[:t_start]\n\n        time_range = np.flip(timesteps)\n        total_steps = timesteps.shape[0]\n        print(f\"Running DDIM Sampling with {total_steps} timesteps\")\n\n        iterator = tqdm(time_range, desc='Decoding image', total=total_steps)\n        x_dec = x_latent\n        for i, step in enumerate(iterator):\n            index = total_steps - i - 1\n            ts = torch.full((x_latent.shape[0],), step, device=x_latent.device, dtype=torch.long)\n            x_dec, _ = self.p_sample_ddim(x_dec, cond, ts, index=index, use_original_steps=use_original_steps,\n                                          unconditional_guidance_scale=unconditional_guidance_scale,\n                                          unconditional_conditioning=unconditional_conditioning)\n            if callback: callback(i)\n        return x_dec\n"
  },
  {
    "path": "ToonCrafter/cldm/hack.py",
    "content": "import torch\nimport einops\n\nimport ldm.modules.encoders.modules\nimport ldm.modules.attention\n\nfrom transformers import logging\nfrom ToonCrafter.ldm.modules.attention import default\n\n\ndef disable_verbosity():\n    logging.set_verbosity_error()\n    print('logging improved.')\n    return\n\n\ndef enable_sliced_attention():\n    ldm.modules.attention.CrossAttention.forward = _hacked_sliced_attentin_forward\n    print('Enabled sliced_attention.')\n    return\n\n\ndef hack_everything(clip_skip=0):\n    disable_verbosity()\n    ldm.modules.encoders.modules.FrozenCLIPEmbedder.forward = _hacked_clip_forward\n    ldm.modules.encoders.modules.FrozenCLIPEmbedder.clip_skip = clip_skip\n    print('Enabled clip hacks.')\n    return\n\n\n# Written by Lvmin\ndef _hacked_clip_forward(self, text):\n    PAD = self.tokenizer.pad_token_id\n    EOS = self.tokenizer.eos_token_id\n    BOS = self.tokenizer.bos_token_id\n\n    def tokenize(t):\n        return self.tokenizer(t, truncation=False, add_special_tokens=False)[\"input_ids\"]\n\n    def transformer_encode(t):\n        if self.clip_skip > 1:\n            rt = self.transformer(input_ids=t, output_hidden_states=True)\n            return self.transformer.text_model.final_layer_norm(rt.hidden_states[-self.clip_skip])\n        else:\n            return self.transformer(input_ids=t, output_hidden_states=False).last_hidden_state\n\n    def split(x):\n        return x[75 * 0: 75 * 1], x[75 * 1: 75 * 2], x[75 * 2: 75 * 3]\n\n    def pad(x, p, i):\n        return x[:i] if len(x) >= i else x + [p] * (i - len(x))\n\n    raw_tokens_list = tokenize(text)\n    tokens_list = []\n\n    for raw_tokens in raw_tokens_list:\n        raw_tokens_123 = split(raw_tokens)\n        raw_tokens_123 = [[BOS] + raw_tokens_i + [EOS] for raw_tokens_i in raw_tokens_123]\n        raw_tokens_123 = [pad(raw_tokens_i, PAD, 77) for raw_tokens_i in raw_tokens_123]\n        tokens_list.append(raw_tokens_123)\n\n    tokens_list = torch.IntTensor(tokens_list).to(self.device)\n\n    feed = einops.rearrange(tokens_list, 'b f i -> (b f) i')\n    y = transformer_encode(feed)\n    z = einops.rearrange(y, '(b f) i c -> b (f i) c', f=3)\n\n    return z\n\n\n# Stolen from https://github.com/basujindal/stable-diffusion/blob/main/optimizedSD/splitAttention.py\ndef _hacked_sliced_attentin_forward(self, x, context=None, mask=None):\n    h = self.heads\n\n    q = self.to_q(x)\n    context = default(context, x)\n    k = self.to_k(context)\n    v = self.to_v(context)\n    del context, x\n\n    q, k, v = map(lambda t: einops.rearrange(t, 'b n (h d) -> (b h) n d', h=h), (q, k, v))\n\n    limit = k.shape[0]\n    att_step = 1\n    q_chunks = list(torch.tensor_split(q, limit // att_step, dim=0))\n    k_chunks = list(torch.tensor_split(k, limit // att_step, dim=0))\n    v_chunks = list(torch.tensor_split(v, limit // att_step, dim=0))\n\n    q_chunks.reverse()\n    k_chunks.reverse()\n    v_chunks.reverse()\n    sim = torch.zeros(q.shape[0], q.shape[1], v.shape[2], device=q.device)\n    del k, q, v\n    for i in range(0, limit, att_step):\n        q_buffer = q_chunks.pop()\n        k_buffer = k_chunks.pop()\n        v_buffer = v_chunks.pop()\n        sim_buffer = torch.einsum('b i d, b j d -> b i j', q_buffer, k_buffer) * self.scale\n\n        del k_buffer, q_buffer\n        # attention, what we cannot get enough of, by chunks\n\n        sim_buffer = sim_buffer.softmax(dim=-1)\n\n        sim_buffer = torch.einsum('b i j, b j d -> b i d', sim_buffer, v_buffer)\n        del v_buffer\n        
sim[i:i + att_step, :, :] = sim_buffer\n\n        del sim_buffer\n    sim = einops.rearrange(sim, '(b h) n d -> b n (h d)', h=h)\n    return self.to_out(sim)\n"
  },
  {
    "path": "ToonCrafter/cldm/logger.py",
    "content": "import os\n\nimport numpy as np\nimport torch\nimport torchvision\nfrom PIL import Image\nfrom pytorch_lightning.callbacks import Callback\nfrom pytorch_lightning.utilities.distributed import rank_zero_only\n\n\nclass ImageLogger(Callback):\n    def __init__(self, batch_frequency=2000, max_images=4, clamp=True, increase_log_steps=True,\n                 rescale=True, disabled=False, log_on_batch_idx=False, log_first_step=False,\n                 log_images_kwargs=None):\n        super().__init__()\n        self.rescale = rescale\n        self.batch_freq = batch_frequency\n        self.max_images = max_images\n        if not increase_log_steps:\n            self.log_steps = [self.batch_freq]\n        self.clamp = clamp\n        self.disabled = disabled\n        self.log_on_batch_idx = log_on_batch_idx\n        self.log_images_kwargs = log_images_kwargs if log_images_kwargs else {}\n        self.log_first_step = log_first_step\n\n    @rank_zero_only\n    def log_local(self, save_dir, split, images, global_step, current_epoch, batch_idx):\n        root = os.path.join(save_dir, \"image_log\", split)\n        for k in images:\n            grid = torchvision.utils.make_grid(images[k], nrow=4)\n            if self.rescale:\n                grid = (grid + 1.0) / 2.0  # -1,1 -> 0,1; c,h,w\n            grid = grid.transpose(0, 1).transpose(1, 2).squeeze(-1)\n            grid = grid.numpy()\n            grid = (grid * 255).astype(np.uint8)\n            filename = \"{}_gs-{:06}_e-{:06}_b-{:06}.png\".format(k, global_step, current_epoch, batch_idx)\n            path = os.path.join(root, filename)\n            os.makedirs(os.path.split(path)[0], exist_ok=True)\n            Image.fromarray(grid).save(path)\n\n    def log_img(self, pl_module, batch, batch_idx, split=\"train\"):\n        check_idx = batch_idx  # if self.log_on_batch_idx else pl_module.global_step\n        if (self.check_frequency(check_idx) and  # batch_idx % self.batch_freq == 0\n                hasattr(pl_module, \"log_images\") and\n                callable(pl_module.log_images) and\n                self.max_images > 0):\n            logger = type(pl_module.logger)\n\n            is_train = pl_module.training\n            if is_train:\n                pl_module.eval()\n\n            with torch.no_grad():\n                images = pl_module.log_images(batch, split=split, **self.log_images_kwargs)\n\n            for k in images:\n                N = min(images[k].shape[0], self.max_images)\n                images[k] = images[k][:N]\n                if isinstance(images[k], torch.Tensor):\n                    images[k] = images[k].detach().cpu()\n                    if self.clamp:\n                        images[k] = torch.clamp(images[k], -1., 1.)\n\n            self.log_local(pl_module.logger.save_dir, split, images,\n                           pl_module.global_step, pl_module.current_epoch, batch_idx)\n\n            if is_train:\n                pl_module.train()\n\n    def check_frequency(self, check_idx):\n        return check_idx % self.batch_freq == 0\n\n    def on_train_batch_end(self, trainer, pl_module, outputs, batch, batch_idx, dataloader_idx):\n        if not self.disabled:\n            self.log_img(pl_module, batch, batch_idx, split=\"train\")\n"
  },
  {
    "path": "ToonCrafter/cldm/model.py",
    "content": "import os\nimport torch\n\nfrom omegaconf import OmegaConf\nfrom comfy.ldm.util import instantiate_from_config\n\n\ndef get_state_dict(d):\n    return d.get('state_dict', d)\n\n\ndef load_state_dict(ckpt_path, location='cpu'):\n    _, extension = os.path.splitext(ckpt_path)\n    if extension.lower() == \".safetensors\":\n        import safetensors.torch\n        state_dict = safetensors.torch.load_file(ckpt_path, device=location)\n    else:\n        state_dict = get_state_dict(torch.load(ckpt_path, map_location=torch.device(location)))\n    state_dict = get_state_dict(state_dict)\n    print(f'Loaded state_dict from [{ckpt_path}]')\n    return state_dict\n\n\ndef create_model(config_path):\n    config = OmegaConf.load(config_path)\n    model = instantiate_from_config(config.model).cpu()\n    print(f'Loaded model config from [{config_path}]')\n    return model\n"
  },
  {
    "path": "ToonCrafter/configs/cldm_v21.yaml",
    "content": "control_stage_config:\n  target: ToonCrafter.cldm.cldm.ControlNet\n  params:\n    use_checkpoint: True\n    image_size: 32 # unused\n    in_channels: 4\n    hint_channels: 1\n    model_channels: 320\n    attention_resolutions: [ 4, 2, 1 ]\n    num_res_blocks: 2\n    channel_mult: [ 1, 2, 4, 4 ]\n    num_head_channels: 64 # need to fix for flash-attn\n    use_spatial_transformer: True\n    use_linear_in_transformer: True\n    transformer_depth: 1\n    context_dim: 1024\n    legacy: False\n"
  },
  {
    "path": "ToonCrafter/configs/inference_512_v1.0.yaml",
    "content": "model:\n  target: lvdm.models.ddpm3d.LatentVisualDiffusion\n  params:\n    rescale_betas_zero_snr: True\n    parameterization: \"v\"\n    linear_start: 0.00085\n    linear_end: 0.012\n    num_timesteps_cond: 1\n    timesteps: 1000\n    first_stage_key: video\n    cond_stage_key: caption\n    cond_stage_trainable: False\n    conditioning_key: hybrid\n    image_size: [40, 64]\n    channels: 4\n    scale_by_std: False\n    scale_factor: 0.18215\n    use_ema: False\n    uncond_type: 'empty_seq'\n    use_dynamic_rescale: true\n    base_scale: 0.7\n    fps_condition_type: 'fps'\n    perframe_ae: True\n    loop_video: true\n    unet_config:\n      target: lvdm.modules.networks.openaimodel3d.UNetModel\n      params:\n        in_channels: 8\n        out_channels: 4\n        model_channels: 320\n        attention_resolutions:\n        - 4\n        - 2\n        - 1\n        num_res_blocks: 2\n        channel_mult:\n        - 1\n        - 2\n        - 4\n        - 4\n        dropout: 0.1\n        num_head_channels: 64\n        transformer_depth: 1\n        context_dim: 1024\n        use_linear: true\n        use_checkpoint: True\n        temporal_conv: True\n        temporal_attention: True\n        temporal_selfatt_only: true\n        use_relative_position: false\n        use_causal_attention: False\n        temporal_length: 16\n        addition_attention: true\n        image_cross_attention: true\n        default_fs: 24\n        fs_condition: true\n\n    first_stage_config:\n      target: lvdm.models.autoencoder.AutoencoderKL_Dualref\n      params:\n        embed_dim: 4\n        monitor: val/rec_loss\n        ddconfig:\n          double_z: True\n          z_channels: 4\n          resolution: 256\n          in_channels: 3\n          out_ch: 3\n          ch: 128\n          ch_mult:\n          - 1\n          - 2\n          - 4\n          - 4\n          num_res_blocks: 2\n          attn_resolutions: []\n          dropout: 0.0\n        lossconfig:\n          target: torch.nn.Identity\n\n    cond_stage_config:\n      target: lvdm.modules.encoders.condition.FrozenOpenCLIPEmbedder\n      params:\n        freeze: true\n        layer: \"penultimate\"\n\n    img_cond_stage_config:\n      target: lvdm.modules.encoders.condition.FrozenOpenCLIPImageEmbedderV2\n      params:\n        freeze: true\n    \n    image_proj_stage_config:\n      target: lvdm.modules.encoders.resampler.Resampler\n      params:\n        dim: 1024\n        depth: 4\n        dim_head: 64\n        heads: 12\n        num_queries: 16\n        embedding_dim: 1280\n        output_dim: 1024\n        ff_mult: 4\n        video_length: 16\n"
  },
  {
    "path": "ToonCrafter/configs/training_1024_v1.0/config.yaml",
    "content": "model:\n  pretrained_checkpoint: checkpoints/dynamicrafter_1024_v1/model.ckpt\n  base_learning_rate: 1.0e-05\n  scale_lr: False\n  target: lvdm.models.ddpm3d.LatentVisualDiffusion\n  params:\n    rescale_betas_zero_snr: True\n    parameterization: \"v\"\n    linear_start: 0.00085\n    linear_end: 0.012\n    num_timesteps_cond: 1\n    log_every_t: 200\n    timesteps: 1000\n    first_stage_key: video\n    cond_stage_key: caption\n    cond_stage_trainable: False\n    image_proj_model_trainable: True\n    conditioning_key: hybrid\n    image_size: [72, 128]\n    channels: 4\n    scale_by_std: False\n    scale_factor: 0.18215\n    use_ema: False\n    uncond_prob: 0.05\n    uncond_type: 'empty_seq'\n    rand_cond_frame: true\n    use_dynamic_rescale: true\n    base_scale: 0.3\n    fps_condition_type: 'fps'\n    perframe_ae: True\n\n    unet_config:\n      target: lvdm.modules.networks.openaimodel3d.UNetModel\n      params:\n        in_channels: 8\n        out_channels: 4\n        model_channels: 320\n        attention_resolutions:\n        - 4\n        - 2\n        - 1\n        num_res_blocks: 2\n        channel_mult:\n        - 1\n        - 2\n        - 4\n        - 4\n        dropout: 0.1\n        num_head_channels: 64\n        transformer_depth: 1\n        context_dim: 1024\n        use_linear: true\n        use_checkpoint: True\n        temporal_conv: True\n        temporal_attention: True\n        temporal_selfatt_only: true\n        use_relative_position: false\n        use_causal_attention: False\n        temporal_length: 16\n        addition_attention: true\n        image_cross_attention: true\n        default_fs: 10\n        fs_condition: true\n\n    first_stage_config:\n      target: lvdm.models.autoencoder.AutoencoderKL\n      params:\n        embed_dim: 4\n        monitor: val/rec_loss\n        ddconfig:\n          double_z: True\n          z_channels: 4\n          resolution: 256\n          in_channels: 3\n          out_ch: 3\n          ch: 128\n          ch_mult:\n          - 1\n          - 2\n          - 4\n          - 4\n          num_res_blocks: 2\n          attn_resolutions: []\n          dropout: 0.0\n        lossconfig:\n          target: torch.nn.Identity\n\n    cond_stage_config:\n      target: lvdm.modules.encoders.condition.FrozenOpenCLIPEmbedder\n      params:\n        freeze: true\n        layer: \"penultimate\"\n\n    img_cond_stage_config:\n      target: lvdm.modules.encoders.condition.FrozenOpenCLIPImageEmbedderV2\n      params:\n        freeze: true\n    \n    image_proj_stage_config:\n      target: lvdm.modules.encoders.resampler.Resampler\n      params:\n        dim: 1024\n        depth: 4\n        dim_head: 64\n        heads: 12\n        num_queries: 16\n        embedding_dim: 1280\n        output_dim: 1024\n        ff_mult: 4\n        video_length: 16\n\ndata:\n  target: utils_data.DataModuleFromConfig\n  params:\n    batch_size: 1\n    num_workers: 12\n    wrap: false\n    train:\n      target: lvdm.data.webvid.WebVid\n      params:\n        data_dir: <WebVid10M DATA>\n        meta_path: <.csv FILE>\n        video_length: 16\n        frame_stride: 6\n        load_raw_resolution: true\n        resolution: [576, 1024]\n        spatial_transform: resize_center_crop\n        random_fs: true  ## if true, we uniformly sample fs with max_fs=frame_stride (above)\n\nlightning:\n  precision: 16\n  # strategy: deepspeed_stage_2\n  trainer:\n    benchmark: True\n    accumulate_grad_batches: 2\n    max_steps: 100000\n    # logger\n    log_every_n_steps: 50\n  
  # val\n    val_check_interval: 0.5\n    gradient_clip_algorithm: 'norm'\n    gradient_clip_val: 0.5\n  callbacks:\n    model_checkpoint:\n      target: pytorch_lightning.callbacks.ModelCheckpoint\n      params:\n        every_n_train_steps: 9000 #1000\n        filename: \"{epoch}-{step}\"\n        save_weights_only: True\n    metrics_over_trainsteps_checkpoint:\n      target: pytorch_lightning.callbacks.ModelCheckpoint\n      params:\n        filename: '{epoch}-{step}'\n        save_weights_only: True\n        every_n_train_steps: 10000 #20000 # 3s/step*2w=\n    batch_logger:\n      target: callbacks.ImageLogger\n      params:\n        batch_frequency: 500\n        to_local: False\n        max_images: 8\n        log_images_kwargs:\n          ddim_steps: 50\n          unconditional_guidance_scale: 7.5\n          timestep_spacing: uniform_trailing\n          guidance_rescale: 0.7"
  },
  {
    "path": "ToonCrafter/configs/training_1024_v1.0/run.sh",
    "content": "# NCCL configuration\n# export NCCL_DEBUG=INFO\n# export NCCL_IB_DISABLE=0\n# export NCCL_IB_GID_INDEX=3\n# export NCCL_NET_GDR_LEVEL=3\n# export NCCL_TOPO_FILE=/tmp/topo.txt\n\n# args\nname=\"training_1024_v1.0\"\nconfig_file=configs/${name}/config.yaml\n\n# save root dir for logs, checkpoints, tensorboard record, etc.\nsave_root=\"<YOUR_SAVE_ROOT_DIR>\"\n\nmkdir -p $save_root/$name\n\n## run\nCUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7 python3 -m torch.distributed.launch \\\n--nproc_per_node=$HOST_GPU_NUM --nnodes=1 --master_addr=127.0.0.1 --master_port=12352 --node_rank=0 \\\n./main/trainer.py \\\n--base $config_file \\\n--train \\\n--name $name \\\n--logdir $save_root \\\n--devices $HOST_GPU_NUM \\\nlightning.trainer.num_nodes=1\n\n## debugging\n# CUDA_VISIBLE_DEVICES=0,1,2,3 python3 -m torch.distributed.launch \\\n# --nproc_per_node=4 --nnodes=1 --master_addr=127.0.0.1 --master_port=12352 --node_rank=0 \\\n# ./main/trainer.py \\\n# --base $config_file \\\n# --train \\\n# --name $name \\\n# --logdir $save_root \\\n# --devices 4 \\\n# lightning.trainer.num_nodes=1"
  },
  {
    "path": "ToonCrafter/configs/training_512_v1.0/config.yaml",
    "content": "model:\n  pretrained_checkpoint: checkpoints/dynamicrafter_512_v1/model.ckpt\n  base_learning_rate: 1.0e-05\n  scale_lr: False\n  target: lvdm.models.ddpm3d.LatentVisualDiffusion\n  params:\n    rescale_betas_zero_snr: True\n    parameterization: \"v\"\n    linear_start: 0.00085\n    linear_end: 0.012\n    num_timesteps_cond: 1\n    log_every_t: 200\n    timesteps: 1000\n    first_stage_key: video\n    cond_stage_key: caption\n    cond_stage_trainable: False\n    image_proj_model_trainable: True\n    conditioning_key: hybrid\n    image_size: [40, 64]\n    channels: 4\n    scale_by_std: False\n    scale_factor: 0.18215\n    use_ema: False\n    uncond_prob: 0.05\n    uncond_type: 'empty_seq'\n    rand_cond_frame: true\n    use_dynamic_rescale: true\n    base_scale: 0.7\n    fps_condition_type: 'fps'\n    perframe_ae: True\n\n    unet_config:\n      target: lvdm.modules.networks.openaimodel3d.UNetModel\n      params:\n        in_channels: 8\n        out_channels: 4\n        model_channels: 320\n        attention_resolutions:\n        - 4\n        - 2\n        - 1\n        num_res_blocks: 2\n        channel_mult:\n        - 1\n        - 2\n        - 4\n        - 4\n        dropout: 0.1\n        num_head_channels: 64\n        transformer_depth: 1\n        context_dim: 1024\n        use_linear: true\n        use_checkpoint: True\n        temporal_conv: True\n        temporal_attention: True\n        temporal_selfatt_only: true\n        use_relative_position: false\n        use_causal_attention: False\n        temporal_length: 16\n        addition_attention: true\n        image_cross_attention: true\n        default_fs: 10\n        fs_condition: true\n\n    first_stage_config:\n      target: lvdm.models.autoencoder.AutoencoderKL\n      params:\n        embed_dim: 4\n        monitor: val/rec_loss\n        ddconfig:\n          double_z: True\n          z_channels: 4\n          resolution: 256\n          in_channels: 3\n          out_ch: 3\n          ch: 128\n          ch_mult:\n          - 1\n          - 2\n          - 4\n          - 4\n          num_res_blocks: 2\n          attn_resolutions: []\n          dropout: 0.0\n        lossconfig:\n          target: torch.nn.Identity\n\n    cond_stage_config:\n      target: lvdm.modules.encoders.condition.FrozenOpenCLIPEmbedder\n      params:\n        freeze: true\n        layer: \"penultimate\"\n\n    img_cond_stage_config:\n      target: lvdm.modules.encoders.condition.FrozenOpenCLIPImageEmbedderV2\n      params:\n        freeze: true\n    \n    image_proj_stage_config:\n      target: lvdm.modules.encoders.resampler.Resampler\n      params:\n        dim: 1024\n        depth: 4\n        dim_head: 64\n        heads: 12\n        num_queries: 16\n        embedding_dim: 1280\n        output_dim: 1024\n        ff_mult: 4\n        video_length: 16\n\ndata:\n  target: utils_data.DataModuleFromConfig\n  params:\n    batch_size: 2\n    num_workers: 12\n    wrap: false\n    train:\n      target: lvdm.data.webvid.WebVid\n      params:\n        data_dir: <WebVid10M DATA>\n        meta_path: <.csv FILE>\n        video_length: 16\n        frame_stride: 6\n        load_raw_resolution: true\n        resolution: [320, 512]\n        spatial_transform: resize_center_crop\n        random_fs: true  ## if true, we uniformly sample fs with max_fs=frame_stride (above)\n\nlightning:\n  precision: 16\n  # strategy: deepspeed_stage_2\n  trainer:\n    benchmark: True\n    accumulate_grad_batches: 2\n    max_steps: 100000\n    # logger\n    log_every_n_steps: 50\n    
# val\n    val_check_interval: 0.5\n    gradient_clip_algorithm: 'norm'\n    gradient_clip_val: 0.5\n  callbacks:\n    model_checkpoint:\n      target: pytorch_lightning.callbacks.ModelCheckpoint\n      params:\n        every_n_train_steps: 9000 #1000\n        filename: \"{epoch}-{step}\"\n        save_weights_only: True\n    metrics_over_trainsteps_checkpoint:\n      target: pytorch_lightning.callbacks.ModelCheckpoint\n      params:\n        filename: '{epoch}-{step}'\n        save_weights_only: True\n        every_n_train_steps: 10000 #20000 # 3s/step*2w=\n    batch_logger:\n      target: callbacks.ImageLogger\n      params:\n        batch_frequency: 500\n        to_local: False\n        max_images: 8\n        log_images_kwargs:\n          ddim_steps: 50\n          unconditional_guidance_scale: 7.5\n          timestep_spacing: uniform_trailing\n          guidance_rescale: 0.7"
  },
  {
    "path": "ToonCrafter/configs/training_512_v1.0/run.sh",
    "content": "# NCCL configuration\n# export NCCL_DEBUG=INFO\n# export NCCL_IB_DISABLE=0\n# export NCCL_IB_GID_INDEX=3\n# export NCCL_NET_GDR_LEVEL=3\n# export NCCL_TOPO_FILE=/tmp/topo.txt\n\n# args\nname=\"training_512_v1.0\"\nconfig_file=configs/${name}/config.yaml\n\n# save root dir for logs, checkpoints, tensorboard record, etc.\nsave_root=\"<YOUR_SAVE_ROOT_DIR>\"\n\nmkdir -p $save_root/$name\n\n## run\nCUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7 python3 -m torch.distributed.launch \\\n--nproc_per_node=$HOST_GPU_NUM --nnodes=1 --master_addr=127.0.0.1 --master_port=12352 --node_rank=0 \\\n./main/trainer.py \\\n--base $config_file \\\n--train \\\n--name $name \\\n--logdir $save_root \\\n--devices $HOST_GPU_NUM \\\nlightning.trainer.num_nodes=1\n\n## debugging\n# CUDA_VISIBLE_DEVICES=0,1,2,3 python3 -m torch.distributed.launch \\\n# --nproc_per_node=4 --nnodes=1 --master_addr=127.0.0.1 --master_port=12352 --node_rank=0 \\\n# ./main/trainer.py \\\n# --base $config_file \\\n# --train \\\n# --name $name \\\n# --logdir $save_root \\\n# --devices 4 \\\n# lightning.trainer.num_nodes=1"
  },
  {
    "path": "ToonCrafter/gradio_app.py",
    "content": "import os\nimport argparse\nimport sys\nimport gradio as gr\nfrom scripts.gradio.i2v_test_application import Image2Video\nsys.path.insert(1, os.path.join(sys.path[0], 'lvdm'))\n\n\ni2v_examples_interp_512 = [\n    ['prompts/512_interp/74906_1462_frame1.png', 'walking man', 50, 7.5, 1.0, 10, 123, 'prompts/512_interp/74906_1462_frame3.png'],\n    ['prompts/512_interp/Japan_v2_2_062266_s2_frame1.png', 'an anime scene', 50, 7.5, 1.0, 10, 789, 'prompts/512_interp/Japan_v2_2_062266_s2_frame3.png'],\n    ['prompts/512_interp/Japan_v2_3_119235_s2_frame1.png', 'an anime scene', 50, 7.5, 1.0, 10, 123, 'prompts/512_interp/Japan_v2_3_119235_s2_frame3.png'],\n]\n\n\ndef dynamicrafter_demo(result_dir='./tmp/', res=512):\n    if res == 1024:\n        resolution = '576_1024'\n        css = \"\"\"#input_img {max-width: 1024px !important} #output_vid {max-width: 1024px; max-height:576px}\"\"\"\n    elif res == 512:\n        resolution = '320_512'\n        css = \"\"\"#input_img {max-width: 512px !important} #output_vid {max-width: 512px; max-height: 320px} #input_img2 {max-width: 512px !important} #output_vid {max-width: 512px; max-height: 320px}\"\"\"\n    elif res == 256:\n        resolution = '256_256'\n        css = \"\"\"#input_img {max-width: 256px !important} #output_vid {max-width: 256px; max-height: 256px}\"\"\"\n    else:\n        raise NotImplementedError(f\"Unsupported resolution: {res}\")\n    image2video = Image2Video(result_dir, resolution=resolution)\n    with gr.Blocks(analytics_enabled=False, css=css) as dynamicrafter_iface:\n\n        with gr.Tab(label='ToonCrafter_320x512'):\n            with gr.Column():\n                with gr.Row():\n                    with gr.Column():\n                        with gr.Row():\n                            i2v_input_image = gr.Image(label=\"Input Image1\", elem_id=\"input_img\")\n                        with gr.Row():\n                            i2v_input_text = gr.Text(label='Prompts')\n                        with gr.Row():\n                            i2v_seed = gr.Slider(label='Random Seed', minimum=0, maximum=50000, step=1, value=123)\n                            i2v_eta = gr.Slider(minimum=0.0, maximum=1.0, step=0.1, label='ETA', value=1.0, elem_id=\"i2v_eta\")\n                            i2v_cfg_scale = gr.Slider(minimum=1.0, maximum=15.0, step=0.5, label='CFG Scale', value=7.5, elem_id=\"i2v_cfg_scale\")\n                        with gr.Row():\n                            i2v_steps = gr.Slider(minimum=1, maximum=60, step=1, elem_id=\"i2v_steps\", label=\"Sampling steps\", value=50)\n                            i2v_motion = gr.Slider(minimum=5, maximum=30, step=1, elem_id=\"i2v_motion\", label=\"FPS\", value=10)\n                        i2v_end_btn = gr.Button(\"Generate\")\n                    with gr.Column():\n                        with gr.Row():\n                            i2v_input_image2 = gr.Image(label=\"Input Image2\", elem_id=\"input_img2\")\n                        with gr.Row():\n                            i2v_output_video = gr.Video(label=\"Generated Video\", elem_id=\"output_vid\", autoplay=True, show_share_button=True)\n\n                gr.Examples(examples=i2v_examples_interp_512,\n                            inputs=[i2v_input_image, i2v_input_text, i2v_steps, i2v_cfg_scale, i2v_eta, i2v_motion, i2v_seed, i2v_input_image2],\n                            outputs=[i2v_output_video],\n                            fn=image2video.get_image,\n                            cache_examples=False,\n                      
      )\n            i2v_end_btn.click(inputs=[i2v_input_image, i2v_input_text, i2v_steps, i2v_cfg_scale, i2v_eta, i2v_motion, i2v_seed, i2v_input_image2],\n                              outputs=[i2v_output_video],\n                              fn=image2video.get_image\n                              )\n\n    return dynamicrafter_iface\n\n\ndef get_parser():\n    parser = argparse.ArgumentParser()\n    return parser\n\n\nif __name__ == \"__main__\":\n    parser = get_parser()\n    args = parser.parse_args()\n\n    result_dir = os.path.join('./', 'results')\n    dynamicrafter_iface = dynamicrafter_demo(result_dir)\n    dynamicrafter_iface.queue(max_size=12)\n    dynamicrafter_iface.launch(max_threads=1)\n    # dynamicrafter_iface.launch(server_name='0.0.0.0', server_port=80, max_threads=1)\n"
  },
  {
    "path": "ToonCrafter/ldm/data/__init__.py",
    "content": ""
  },
  {
    "path": "ToonCrafter/ldm/data/util.py",
    "content": "import torch\n\nfrom ToonCrafter.ldm.modules.midas.api import load_midas_transform\n\n\nclass AddMiDaS(object):\n    def __init__(self, model_type):\n        super().__init__()\n        self.transform = load_midas_transform(model_type)\n\n    def pt2np(self, x):\n        x = ((x + 1.0) * .5).detach().cpu().numpy()\n        return x\n\n    def np2pt(self, x):\n        x = torch.from_numpy(x) * 2 - 1.\n        return x\n\n    def __call__(self, sample):\n        # sample['jpg'] is tensor hwc in [-1, 1] at this point\n        x = self.pt2np(sample['jpg'])\n        x = self.transform({\"image\": x})[\"image\"]\n        sample['midas_in'] = x\n        return sample"
  },
  {
    "path": "ToonCrafter/ldm/models/autoencoder.py",
    "content": "import torch\nimport pytorch_lightning as pl\nimport torch.nn.functional as F\nfrom contextlib import contextmanager\n\nfrom ToonCrafter.ldm.modules.diffusionmodules.model import Encoder, Decoder\nfrom ToonCrafter.ldm.modules.distributions.distributions import DiagonalGaussianDistribution\n\nfrom ToonCrafter.ldm.util import instantiate_from_config\nfrom ToonCrafter.ldm.modules.ema import LitEma\n\n\nclass AutoencoderKL(pl.LightningModule):\n    def __init__(self,\n                 ddconfig,\n                 lossconfig,\n                 embed_dim,\n                 ckpt_path=None,\n                 ignore_keys=[],\n                 image_key=\"image\",\n                 colorize_nlabels=None,\n                 monitor=None,\n                 ema_decay=None,\n                 learn_logvar=False\n                 ):\n        super().__init__()\n        self.learn_logvar = learn_logvar\n        self.image_key = image_key\n        self.encoder = Encoder(**ddconfig)\n        self.decoder = Decoder(**ddconfig)\n        self.loss = instantiate_from_config(lossconfig)\n        assert ddconfig[\"double_z\"]\n        self.quant_conv = torch.nn.Conv2d(2*ddconfig[\"z_channels\"], 2*embed_dim, 1)\n        self.post_quant_conv = torch.nn.Conv2d(embed_dim, ddconfig[\"z_channels\"], 1)\n        self.embed_dim = embed_dim\n        if colorize_nlabels is not None:\n            assert type(colorize_nlabels)==int\n            self.register_buffer(\"colorize\", torch.randn(3, colorize_nlabels, 1, 1))\n        if monitor is not None:\n            self.monitor = monitor\n\n        self.use_ema = ema_decay is not None\n        if self.use_ema:\n            self.ema_decay = ema_decay\n            assert 0. < ema_decay < 1.\n            self.model_ema = LitEma(self, decay=ema_decay)\n            print(f\"Keeping EMAs of {len(list(self.model_ema.buffers()))}.\")\n\n        if ckpt_path is not None:\n            self.init_from_ckpt(ckpt_path, ignore_keys=ignore_keys)\n\n    def init_from_ckpt(self, path, ignore_keys=list()):\n        sd = torch.load(path, map_location=\"cpu\")[\"state_dict\"]\n        keys = list(sd.keys())\n        for k in keys:\n            for ik in ignore_keys:\n                if k.startswith(ik):\n                    print(\"Deleting key {} from state_dict.\".format(k))\n                    del sd[k]\n        self.load_state_dict(sd, strict=False)\n        print(f\"Restored from {path}\")\n\n    @contextmanager\n    def ema_scope(self, context=None):\n        if self.use_ema:\n            self.model_ema.store(self.parameters())\n            self.model_ema.copy_to(self)\n            if context is not None:\n                print(f\"{context}: Switched to EMA weights\")\n        try:\n            yield None\n        finally:\n            if self.use_ema:\n                self.model_ema.restore(self.parameters())\n                if context is not None:\n                    print(f\"{context}: Restored training weights\")\n\n    def on_train_batch_end(self, *args, **kwargs):\n        if self.use_ema:\n            self.model_ema(self)\n\n    def encode(self, x):\n        h = self.encoder(x)\n        moments = self.quant_conv(h)\n        posterior = DiagonalGaussianDistribution(moments)\n        return posterior\n\n    def decode(self, z):\n        z = self.post_quant_conv(z)\n        dec = self.decoder(z)\n        return dec\n\n    def forward(self, input, sample_posterior=True):\n        posterior = self.encode(input)\n        if sample_posterior:\n            z = 
posterior.sample()\n        else:\n            z = posterior.mode()\n        dec = self.decode(z)\n        return dec, posterior\n\n    def get_input(self, batch, k):\n        x = batch[k]\n        if len(x.shape) == 3:\n            x = x[..., None]\n        x = x.permute(0, 3, 1, 2).to(memory_format=torch.contiguous_format).float()\n        return x\n\n    def training_step(self, batch, batch_idx, optimizer_idx):\n        inputs = self.get_input(batch, self.image_key)\n        reconstructions, posterior = self(inputs)\n\n        if optimizer_idx == 0:\n            # train encoder+decoder+logvar\n            aeloss, log_dict_ae = self.loss(inputs, reconstructions, posterior, optimizer_idx, self.global_step,\n                                            last_layer=self.get_last_layer(), split=\"train\")\n            self.log(\"aeloss\", aeloss, prog_bar=True, logger=True, on_step=True, on_epoch=True)\n            self.log_dict(log_dict_ae, prog_bar=False, logger=True, on_step=True, on_epoch=False)\n            return aeloss\n\n        if optimizer_idx == 1:\n            # train the discriminator\n            discloss, log_dict_disc = self.loss(inputs, reconstructions, posterior, optimizer_idx, self.global_step,\n                                                last_layer=self.get_last_layer(), split=\"train\")\n\n            self.log(\"discloss\", discloss, prog_bar=True, logger=True, on_step=True, on_epoch=True)\n            self.log_dict(log_dict_disc, prog_bar=False, logger=True, on_step=True, on_epoch=False)\n            return discloss\n\n    def validation_step(self, batch, batch_idx):\n        log_dict = self._validation_step(batch, batch_idx)\n        with self.ema_scope():\n            log_dict_ema = self._validation_step(batch, batch_idx, postfix=\"_ema\")\n        return log_dict\n\n    def _validation_step(self, batch, batch_idx, postfix=\"\"):\n        inputs = self.get_input(batch, self.image_key)\n        reconstructions, posterior = self(inputs)\n        aeloss, log_dict_ae = self.loss(inputs, reconstructions, posterior, 0, self.global_step,\n                                        last_layer=self.get_last_layer(), split=\"val\"+postfix)\n\n        discloss, log_dict_disc = self.loss(inputs, reconstructions, posterior, 1, self.global_step,\n                                            last_layer=self.get_last_layer(), split=\"val\"+postfix)\n\n        self.log(f\"val{postfix}/rec_loss\", log_dict_ae[f\"val{postfix}/rec_loss\"])\n        self.log_dict(log_dict_ae)\n        self.log_dict(log_dict_disc)\n        return self.log_dict\n\n    def configure_optimizers(self):\n        lr = self.learning_rate\n        ae_params_list = list(self.encoder.parameters()) + list(self.decoder.parameters()) + list(\n            self.quant_conv.parameters()) + list(self.post_quant_conv.parameters())\n        if self.learn_logvar:\n            print(f\"{self.__class__.__name__}: Learning logvar\")\n            ae_params_list.append(self.loss.logvar)\n        opt_ae = torch.optim.Adam(ae_params_list,\n                                  lr=lr, betas=(0.5, 0.9))\n        opt_disc = torch.optim.Adam(self.loss.discriminator.parameters(),\n                                    lr=lr, betas=(0.5, 0.9))\n        return [opt_ae, opt_disc], []\n\n    def get_last_layer(self):\n        return self.decoder.conv_out.weight\n\n    @torch.no_grad()\n    def log_images(self, batch, only_inputs=False, log_ema=False, **kwargs):\n        log = dict()\n        x = self.get_input(batch, self.image_key)\n        x = 
x.to(self.device)\n        if not only_inputs:\n            xrec, posterior = self(x)\n            if x.shape[1] > 3:\n                # colorize with random projection\n                assert xrec.shape[1] > 3\n                x = self.to_rgb(x)\n                xrec = self.to_rgb(xrec)\n            log[\"samples\"] = self.decode(torch.randn_like(posterior.sample()))\n            log[\"reconstructions\"] = xrec\n            if log_ema or self.use_ema:\n                with self.ema_scope():\n                    xrec_ema, posterior_ema = self(x)\n                    if x.shape[1] > 3:\n                        # colorize with random projection\n                        assert xrec_ema.shape[1] > 3\n                        xrec_ema = self.to_rgb(xrec_ema)\n                    log[\"samples_ema\"] = self.decode(torch.randn_like(posterior_ema.sample()))\n                    log[\"reconstructions_ema\"] = xrec_ema\n        log[\"inputs\"] = x\n        return log\n\n    def to_rgb(self, x):\n        assert self.image_key == \"segmentation\"\n        if not hasattr(self, \"colorize\"):\n            self.register_buffer(\"colorize\", torch.randn(3, x.shape[1], 1, 1).to(x))\n        x = F.conv2d(x, weight=self.colorize)\n        x = 2.*(x-x.min())/(x.max()-x.min()) - 1.\n        return x\n\n\nclass IdentityFirstStage(torch.nn.Module):\n    def __init__(self, *args, vq_interface=False, **kwargs):\n        self.vq_interface = vq_interface\n        super().__init__()\n\n    def encode(self, x, *args, **kwargs):\n        return x\n\n    def decode(self, x, *args, **kwargs):\n        return x\n\n    def quantize(self, x, *args, **kwargs):\n        if self.vq_interface:\n            return x, None, [None, None, None]\n        return x\n\n    def forward(self, x, *args, **kwargs):\n        return x\n\n"
  },
  {
    "path": "ToonCrafter/ldm/models/diffusion/__init__.py",
    "content": ""
  },
  {
    "path": "ToonCrafter/ldm/models/diffusion/ddim.py",
    "content": "\"\"\"SAMPLING ONLY.\"\"\"\n\nimport torch\nimport numpy as np\nfrom tqdm import tqdm\n\nfrom ToonCrafter.ldm.modules.diffusionmodules.util import make_ddim_sampling_parameters, make_ddim_timesteps, noise_like, extract_into_tensor\n\n\nclass DDIMSampler(object):\n    def __init__(self, model, schedule=\"linear\", **kwargs):\n        super().__init__()\n        self.model = model\n        self.ddpm_num_timesteps = model.num_timesteps\n        self.schedule = schedule\n\n    def register_buffer(self, name, attr):\n        if type(attr) == torch.Tensor:\n            if attr.device != torch.device(\"cuda\"):\n                attr = attr.to(torch.device(\"cuda\"))\n        setattr(self, name, attr)\n\n    def make_schedule(self, ddim_num_steps, ddim_discretize=\"uniform\", ddim_eta=0., verbose=True):\n        self.ddim_timesteps = make_ddim_timesteps(ddim_discr_method=ddim_discretize, num_ddim_timesteps=ddim_num_steps,\n                                                  num_ddpm_timesteps=self.ddpm_num_timesteps,verbose=verbose)\n        alphas_cumprod = self.model.alphas_cumprod\n        assert alphas_cumprod.shape[0] == self.ddpm_num_timesteps, 'alphas have to be defined for each timestep'\n        to_torch = lambda x: x.clone().detach().to(torch.float32).to(self.model.device)\n\n        self.register_buffer('betas', to_torch(self.model.betas))\n        self.register_buffer('alphas_cumprod', to_torch(alphas_cumprod))\n        self.register_buffer('alphas_cumprod_prev', to_torch(self.model.alphas_cumprod_prev))\n\n        # calculations for diffusion q(x_t | x_{t-1}) and others\n        self.register_buffer('sqrt_alphas_cumprod', to_torch(np.sqrt(alphas_cumprod.cpu())))\n        self.register_buffer('sqrt_one_minus_alphas_cumprod', to_torch(np.sqrt(1. - alphas_cumprod.cpu())))\n        self.register_buffer('log_one_minus_alphas_cumprod', to_torch(np.log(1. - alphas_cumprod.cpu())))\n        self.register_buffer('sqrt_recip_alphas_cumprod', to_torch(np.sqrt(1. / alphas_cumprod.cpu())))\n        self.register_buffer('sqrt_recipm1_alphas_cumprod', to_torch(np.sqrt(1. / alphas_cumprod.cpu() - 1)))\n\n        # ddim sampling parameters\n        ddim_sigmas, ddim_alphas, ddim_alphas_prev = make_ddim_sampling_parameters(alphacums=alphas_cumprod.cpu(),\n                                                                                   ddim_timesteps=self.ddim_timesteps,\n                                                                                   eta=ddim_eta,verbose=verbose)\n        self.register_buffer('ddim_sigmas', ddim_sigmas)\n        self.register_buffer('ddim_alphas', ddim_alphas)\n        self.register_buffer('ddim_alphas_prev', ddim_alphas_prev)\n        self.register_buffer('ddim_sqrt_one_minus_alphas', np.sqrt(1. 
- ddim_alphas))\n        sigmas_for_original_sampling_steps = ddim_eta * torch.sqrt(\n            (1 - self.alphas_cumprod_prev) / (1 - self.alphas_cumprod) * (\n                        1 - self.alphas_cumprod / self.alphas_cumprod_prev))\n        self.register_buffer('ddim_sigmas_for_original_num_steps', sigmas_for_original_sampling_steps)\n\n    @torch.no_grad()\n    def sample(self,\n               S,\n               batch_size,\n               shape,\n               conditioning=None,\n               callback=None,\n               normals_sequence=None,\n               img_callback=None,\n               quantize_x0=False,\n               eta=0.,\n               mask=None,\n               x0=None,\n               temperature=1.,\n               noise_dropout=0.,\n               score_corrector=None,\n               corrector_kwargs=None,\n               verbose=True,\n               x_T=None,\n               log_every_t=100,\n               unconditional_guidance_scale=1.,\n               unconditional_conditioning=None, # this has to come in the same format as the conditioning, # e.g. as encoded tokens, ...\n               dynamic_threshold=None,\n               ucg_schedule=None,\n               **kwargs\n               ):\n        if conditioning is not None:\n            if isinstance(conditioning, dict):\n                ctmp = conditioning[list(conditioning.keys())[0]]\n                while isinstance(ctmp, list): ctmp = ctmp[0]\n                cbs = ctmp.shape[0]\n                if cbs != batch_size:\n                    print(f\"Warning: Got {cbs} conditionings but batch-size is {batch_size}\")\n\n            elif isinstance(conditioning, list):\n                for ctmp in conditioning:\n                    if ctmp.shape[0] != batch_size:\n                        # use ctmp.shape[0] here; cbs is only defined in the dict branch above\n                        print(f\"Warning: Got {ctmp.shape[0]} conditionings but batch-size is {batch_size}\")\n\n            else:\n                if conditioning.shape[0] != batch_size:\n                    print(f\"Warning: Got {conditioning.shape[0]} conditionings but batch-size is {batch_size}\")\n\n        self.make_schedule(ddim_num_steps=S, ddim_eta=eta, verbose=verbose)\n        # sampling\n        C, H, W = shape\n        size = (batch_size, C, H, W)\n        print(f'Data shape for DDIM sampling is {size}, eta {eta}')\n\n        samples, intermediates = self.ddim_sampling(conditioning, size,\n                                                    callback=callback,\n                                                    img_callback=img_callback,\n                                                    quantize_denoised=quantize_x0,\n                                                    mask=mask, x0=x0,\n                                                    ddim_use_original_steps=False,\n                                                    noise_dropout=noise_dropout,\n                                                    temperature=temperature,\n                                                    score_corrector=score_corrector,\n                                                    corrector_kwargs=corrector_kwargs,\n                                                    x_T=x_T,\n                                                    log_every_t=log_every_t,\n                                                    unconditional_guidance_scale=unconditional_guidance_scale,\n                                                    unconditional_conditioning=unconditional_conditioning,\n                                                    
dynamic_threshold=dynamic_threshold,\n                                                    ucg_schedule=ucg_schedule\n                                                    )\n        return samples, intermediates\n\n    @torch.no_grad()\n    def ddim_sampling(self, cond, shape,\n                      x_T=None, ddim_use_original_steps=False,\n                      callback=None, timesteps=None, quantize_denoised=False,\n                      mask=None, x0=None, img_callback=None, log_every_t=100,\n                      temperature=1., noise_dropout=0., score_corrector=None, corrector_kwargs=None,\n                      unconditional_guidance_scale=1., unconditional_conditioning=None, dynamic_threshold=None,\n                      ucg_schedule=None):\n        device = self.model.betas.device\n        b = shape[0]\n        if x_T is None:\n            img = torch.randn(shape, device=device)\n        else:\n            img = x_T\n\n        if timesteps is None:\n            timesteps = self.ddpm_num_timesteps if ddim_use_original_steps else self.ddim_timesteps\n        elif timesteps is not None and not ddim_use_original_steps:\n            subset_end = int(min(timesteps / self.ddim_timesteps.shape[0], 1) * self.ddim_timesteps.shape[0]) - 1\n            timesteps = self.ddim_timesteps[:subset_end]\n\n        intermediates = {'x_inter': [img], 'pred_x0': [img]}\n        time_range = reversed(range(0,timesteps)) if ddim_use_original_steps else np.flip(timesteps)\n        total_steps = timesteps if ddim_use_original_steps else timesteps.shape[0]\n        print(f\"Running DDIM Sampling with {total_steps} timesteps\")\n\n        iterator = tqdm(time_range, desc='DDIM Sampler', total=total_steps)\n\n        for i, step in enumerate(iterator):\n            index = total_steps - i - 1\n            ts = torch.full((b,), step, device=device, dtype=torch.long)\n\n            if mask is not None:\n                assert x0 is not None\n                img_orig = self.model.q_sample(x0, ts)  # TODO: deterministic forward pass?\n                img = img_orig * mask + (1. 
- mask) * img\n\n            if ucg_schedule is not None:\n                assert len(ucg_schedule) == len(time_range)\n                unconditional_guidance_scale = ucg_schedule[i]\n\n            outs = self.p_sample_ddim(img, cond, ts, index=index, use_original_steps=ddim_use_original_steps,\n                                      quantize_denoised=quantize_denoised, temperature=temperature,\n                                      noise_dropout=noise_dropout, score_corrector=score_corrector,\n                                      corrector_kwargs=corrector_kwargs,\n                                      unconditional_guidance_scale=unconditional_guidance_scale,\n                                      unconditional_conditioning=unconditional_conditioning,\n                                      dynamic_threshold=dynamic_threshold)\n            img, pred_x0 = outs\n            if callback: callback(i)\n            if img_callback: img_callback(pred_x0, i)\n\n            if index % log_every_t == 0 or index == total_steps - 1:\n                intermediates['x_inter'].append(img)\n                intermediates['pred_x0'].append(pred_x0)\n\n        return img, intermediates\n\n    @torch.no_grad()\n    def p_sample_ddim(self, x, c, t, index, repeat_noise=False, use_original_steps=False, quantize_denoised=False,\n                      temperature=1., noise_dropout=0., score_corrector=None, corrector_kwargs=None,\n                      unconditional_guidance_scale=1., unconditional_conditioning=None,\n                      dynamic_threshold=None):\n        b, *_, device = *x.shape, x.device\n\n        if unconditional_conditioning is None or unconditional_guidance_scale == 1.:\n            model_output = self.model.apply_model(x, t, c)\n        else:\n            x_in = torch.cat([x] * 2)\n            t_in = torch.cat([t] * 2)\n            if isinstance(c, dict):\n                assert isinstance(unconditional_conditioning, dict)\n                c_in = dict()\n                for k in c:\n                    if isinstance(c[k], list):\n                        c_in[k] = [torch.cat([\n                            unconditional_conditioning[k][i],\n                            c[k][i]]) for i in range(len(c[k]))]\n                    else:\n                        c_in[k] = torch.cat([\n                                unconditional_conditioning[k],\n                                c[k]])\n            elif isinstance(c, list):\n                c_in = list()\n                assert isinstance(unconditional_conditioning, list)\n                for i in range(len(c)):\n                    c_in.append(torch.cat([unconditional_conditioning[i], c[i]]))\n            else:\n                c_in = torch.cat([unconditional_conditioning, c])\n            model_uncond, model_t = self.model.apply_model(x_in, t_in, c_in).chunk(2)\n            model_output = model_uncond + unconditional_guidance_scale * (model_t - model_uncond)\n\n        if self.model.parameterization == \"v\":\n            e_t = self.model.predict_eps_from_z_and_v(x, t, model_output)\n        else:\n            e_t = model_output\n\n        if score_corrector is not None:\n            assert self.model.parameterization == \"eps\", 'not implemented'\n            e_t = score_corrector.modify_score(self.model, e_t, x, t, c, **corrector_kwargs)\n\n        alphas = self.model.alphas_cumprod if use_original_steps else self.ddim_alphas\n        alphas_prev = self.model.alphas_cumprod_prev if use_original_steps else self.ddim_alphas_prev\n        
sqrt_one_minus_alphas = self.model.sqrt_one_minus_alphas_cumprod if use_original_steps else self.ddim_sqrt_one_minus_alphas\n        sigmas = self.model.ddim_sigmas_for_original_num_steps if use_original_steps else self.ddim_sigmas\n        # select parameters corresponding to the currently considered timestep\n        a_t = torch.full((b, 1, 1, 1), alphas[index], device=device)\n        a_prev = torch.full((b, 1, 1, 1), alphas_prev[index], device=device)\n        sigma_t = torch.full((b, 1, 1, 1), sigmas[index], device=device)\n        sqrt_one_minus_at = torch.full((b, 1, 1, 1), sqrt_one_minus_alphas[index],device=device)\n\n        # current prediction for x_0\n        if self.model.parameterization != \"v\":\n            pred_x0 = (x - sqrt_one_minus_at * e_t) / a_t.sqrt()\n        else:\n            pred_x0 = self.model.predict_start_from_z_and_v(x, t, model_output)\n\n        if quantize_denoised:\n            pred_x0, _, *_ = self.model.first_stage_model.quantize(pred_x0)\n\n        if dynamic_threshold is not None:\n            raise NotImplementedError()\n\n        # direction pointing to x_t\n        dir_xt = (1. - a_prev - sigma_t**2).sqrt() * e_t\n        noise = sigma_t * noise_like(x.shape, device, repeat_noise) * temperature\n        if noise_dropout > 0.:\n            noise = torch.nn.functional.dropout(noise, p=noise_dropout)\n        x_prev = a_prev.sqrt() * pred_x0 + dir_xt + noise\n        return x_prev, pred_x0\n\n    @torch.no_grad()\n    def encode(self, x0, c, t_enc, use_original_steps=False, return_intermediates=None,\n               unconditional_guidance_scale=1.0, unconditional_conditioning=None, callback=None):\n        num_reference_steps = self.ddpm_num_timesteps if use_original_steps else self.ddim_timesteps.shape[0]\n\n        assert t_enc <= num_reference_steps\n        num_steps = t_enc\n\n        if use_original_steps:\n            alphas_next = self.alphas_cumprod[:num_steps]\n            alphas = self.alphas_cumprod_prev[:num_steps]\n        else:\n            alphas_next = self.ddim_alphas[:num_steps]\n            alphas = torch.tensor(self.ddim_alphas_prev[:num_steps])\n\n        x_next = x0\n        intermediates = []\n        inter_steps = []\n        for i in tqdm(range(num_steps), desc='Encoding Image'):\n            t = torch.full((x0.shape[0],), i, device=self.model.device, dtype=torch.long)\n            if unconditional_guidance_scale == 1.:\n                noise_pred = self.model.apply_model(x_next, t, c)\n            else:\n                assert unconditional_conditioning is not None\n                e_t_uncond, noise_pred = torch.chunk(\n                    self.model.apply_model(torch.cat((x_next, x_next)), torch.cat((t, t)),\n                                           torch.cat((unconditional_conditioning, c))), 2)\n                noise_pred = e_t_uncond + unconditional_guidance_scale * (noise_pred - e_t_uncond)\n\n            xt_weighted = (alphas_next[i] / alphas[i]).sqrt() * x_next\n            weighted_noise_pred = alphas_next[i].sqrt() * (\n                    (1 / alphas_next[i] - 1).sqrt() - (1 / alphas[i] - 1).sqrt()) * noise_pred\n            x_next = xt_weighted + weighted_noise_pred\n            if return_intermediates and i % (\n                    num_steps // return_intermediates) == 0 and i < num_steps - 1:\n                intermediates.append(x_next)\n                inter_steps.append(i)\n            elif return_intermediates and i >= num_steps - 2:\n                intermediates.append(x_next)\n                
inter_steps.append(i)\n            if callback: callback(i)\n\n        out = {'x_encoded': x_next, 'intermediate_steps': inter_steps}\n        if return_intermediates:\n            out.update({'intermediates': intermediates})\n        return x_next, out\n\n    @torch.no_grad()\n    def stochastic_encode(self, x0, t, use_original_steps=False, noise=None):\n        # fast, but does not allow for exact reconstruction\n        # t serves as an index to gather the correct alphas\n        if use_original_steps:\n            sqrt_alphas_cumprod = self.sqrt_alphas_cumprod\n            sqrt_one_minus_alphas_cumprod = self.sqrt_one_minus_alphas_cumprod\n        else:\n            sqrt_alphas_cumprod = torch.sqrt(self.ddim_alphas)\n            sqrt_one_minus_alphas_cumprod = self.ddim_sqrt_one_minus_alphas\n\n        if noise is None:\n            noise = torch.randn_like(x0)\n        return (extract_into_tensor(sqrt_alphas_cumprod, t, x0.shape) * x0 +\n                extract_into_tensor(sqrt_one_minus_alphas_cumprod, t, x0.shape) * noise)\n\n    @torch.no_grad()\n    def decode(self, x_latent, cond, t_start, unconditional_guidance_scale=1.0, unconditional_conditioning=None,\n               use_original_steps=False, callback=None):\n\n        timesteps = np.arange(self.ddpm_num_timesteps) if use_original_steps else self.ddim_timesteps\n        timesteps = timesteps[:t_start]\n\n        time_range = np.flip(timesteps)\n        total_steps = timesteps.shape[0]\n        print(f\"Running DDIM Sampling with {total_steps} timesteps\")\n\n        iterator = tqdm(time_range, desc='Decoding image', total=total_steps)\n        x_dec = x_latent\n        for i, step in enumerate(iterator):\n            index = total_steps - i - 1\n            ts = torch.full((x_latent.shape[0],), step, device=x_latent.device, dtype=torch.long)\n            x_dec, _ = self.p_sample_ddim(x_dec, cond, ts, index=index, use_original_steps=use_original_steps,\n                                          unconditional_guidance_scale=unconditional_guidance_scale,\n                                          unconditional_conditioning=unconditional_conditioning)\n            if callback: callback(i)\n        return x_dec"
  },
  {
    "path": "ToonCrafter/ldm/models/diffusion/ddpm.py",
    "content": "\"\"\"\nwild mixture of\nhttps://github.com/lucidrains/denoising-diffusion-pytorch/blob/7706bdfc6f527f58d33f84b7b522e61e6e3164b3/denoising_diffusion_pytorch/denoising_diffusion_pytorch.py\nhttps://github.com/openai/improved-diffusion/blob/e94489283bb876ac1477d5dd7709bbbd2d9902ce/improved_diffusion/gaussian_diffusion.py\nhttps://github.com/CompVis/taming-transformers\n-- merci\n\"\"\"\n\nimport torch\nimport torch.nn as nn\nimport numpy as np\nimport pytorch_lightning as pl\nfrom torch.optim.lr_scheduler import LambdaLR\nfrom einops import rearrange, repeat\nfrom contextlib import contextmanager, nullcontext\nfrom functools import partial\nimport itertools\nfrom tqdm import tqdm\nfrom torchvision.utils import make_grid\nfrom pytorch_lightning.utilities import rank_zero_only\nfrom omegaconf import ListConfig\n\nfrom ToonCrafter.ldm.util import log_txt_as_img, exists, default, ismap, isimage, mean_flat, count_params, instantiate_from_config\nfrom ToonCrafter.ldm.modules.ema import LitEma\nfrom ToonCrafter.ldm.modules.distributions.distributions import normal_kl, DiagonalGaussianDistribution\nfrom ToonCrafter.ldm.models.autoencoder import IdentityFirstStage, AutoencoderKL\nfrom ToonCrafter.ldm.modules.diffusionmodules.util import make_beta_schedule, extract_into_tensor, noise_like\nfrom ToonCrafter.ldm.models.diffusion.ddim import DDIMSampler\n\n\n__conditioning_keys__ = {'concat': 'c_concat',\n                         'crossattn': 'c_crossattn',\n                         'adm': 'y'}\n\n\ndef disabled_train(self, mode=True):\n    \"\"\"Overwrite model.train with this function to make sure train/eval mode\n    does not change anymore.\"\"\"\n    return self\n\n\ndef uniform_on_device(r1, r2, shape, device):\n    return (r1 - r2) * torch.rand(*shape, device=device) + r2\n\n\nclass DDPM(pl.LightningModule):\n    # classic DDPM with Gaussian diffusion, in image space\n    def __init__(self,\n                 unet_config,\n                 timesteps=1000,\n                 beta_schedule=\"linear\",\n                 loss_type=\"l2\",\n                 ckpt_path=None,\n                 ignore_keys=[],\n                 load_only_unet=False,\n                 monitor=\"val/loss\",\n                 use_ema=True,\n                 first_stage_key=\"image\",\n                 image_size=256,\n                 channels=3,\n                 log_every_t=100,\n                 clip_denoised=True,\n                 linear_start=1e-4,\n                 linear_end=2e-2,\n                 cosine_s=8e-3,\n                 given_betas=None,\n                 original_elbo_weight=0.,\n                 v_posterior=0.,  # weight for choosing posterior variance as sigma = (1-v) * beta_tilde + v * beta\n                 l_simple_weight=1.,\n                 conditioning_key=None,\n                 parameterization=\"eps\",  # all assuming fixed variance schedules\n                 scheduler_config=None,\n                 use_positional_encodings=False,\n                 learn_logvar=False,\n                 logvar_init=0.,\n                 make_it_fit=False,\n                 ucg_training=None,\n                 reset_ema=False,\n                 reset_num_ema_updates=False,\n                 ):\n        super().__init__()\n        assert parameterization in [\"eps\", \"x0\", \"v\"], 'currently only supporting \"eps\" and \"x0\" and \"v\"'\n        self.parameterization = parameterization\n        print(f\"{self.__class__.__name__}: Running in {self.parameterization}-prediction mode\")\n        
self.cond_stage_model = None\n        self.clip_denoised = clip_denoised\n        self.log_every_t = log_every_t\n        self.first_stage_key = first_stage_key\n        self.image_size = image_size  # try conv?\n        self.channels = channels\n        self.use_positional_encodings = use_positional_encodings\n        self.model = DiffusionWrapper(unet_config, conditioning_key)\n        count_params(self.model, verbose=True)\n        self.use_ema = use_ema\n        if self.use_ema:\n            self.model_ema = LitEma(self.model)\n            print(f\"Keeping EMAs of {len(list(self.model_ema.buffers()))}.\")\n\n        self.use_scheduler = scheduler_config is not None\n        if self.use_scheduler:\n            self.scheduler_config = scheduler_config\n\n        self.v_posterior = v_posterior\n        self.original_elbo_weight = original_elbo_weight\n        self.l_simple_weight = l_simple_weight\n\n        if monitor is not None:\n            self.monitor = monitor\n        self.make_it_fit = make_it_fit\n        if reset_ema: assert exists(ckpt_path)\n        if ckpt_path is not None:\n            self.init_from_ckpt(ckpt_path, ignore_keys=ignore_keys, only_model=load_only_unet)\n            if reset_ema:\n                assert self.use_ema\n                print(f\"Resetting ema to pure model weights. This is useful when restoring from an ema-only checkpoint.\")\n                self.model_ema = LitEma(self.model)\n        if reset_num_ema_updates:\n            print(\" +++++++++++ WARNING: RESETTING NUM_EMA UPDATES TO ZERO +++++++++++ \")\n            assert self.use_ema\n            self.model_ema.reset_num_updates()\n\n        self.register_schedule(given_betas=given_betas, beta_schedule=beta_schedule, timesteps=timesteps,\n                               linear_start=linear_start, linear_end=linear_end, cosine_s=cosine_s)\n\n        self.loss_type = loss_type\n\n        self.learn_logvar = learn_logvar\n        # per-timestep log-variance used in the loss weighting; learnable only when learn_logvar is set\n        logvar = torch.full(fill_value=logvar_init, size=(self.num_timesteps,))\n        if self.learn_logvar:\n            self.logvar = nn.Parameter(logvar, requires_grad=True)\n        else:\n            self.register_buffer('logvar', logvar)\n\n        self.ucg_training = ucg_training or dict()\n        if self.ucg_training:\n            self.ucg_prng = np.random.RandomState()\n\n    def register_schedule(self, given_betas=None, beta_schedule=\"linear\", timesteps=1000,\n                          linear_start=1e-4, linear_end=2e-2, cosine_s=8e-3):\n        if exists(given_betas):\n            betas = given_betas\n        else:\n            betas = make_beta_schedule(beta_schedule, timesteps, linear_start=linear_start, linear_end=linear_end,\n                                       cosine_s=cosine_s)\n        alphas = 1. 
- betas\n        alphas_cumprod = np.cumprod(alphas, axis=0)\n        alphas_cumprod_prev = np.append(1., alphas_cumprod[:-1])\n\n        timesteps, = betas.shape\n        self.num_timesteps = int(timesteps)\n        self.linear_start = linear_start\n        self.linear_end = linear_end\n        assert alphas_cumprod.shape[0] == self.num_timesteps, 'alphas have to be defined for each timestep'\n\n        to_torch = partial(torch.tensor, dtype=torch.float32)\n\n        self.register_buffer('betas', to_torch(betas))\n        self.register_buffer('alphas_cumprod', to_torch(alphas_cumprod))\n        self.register_buffer('alphas_cumprod_prev', to_torch(alphas_cumprod_prev))\n\n        # calculations for diffusion q(x_t | x_{t-1}) and others\n        self.register_buffer('sqrt_alphas_cumprod', to_torch(np.sqrt(alphas_cumprod)))\n        self.register_buffer('sqrt_one_minus_alphas_cumprod', to_torch(np.sqrt(1. - alphas_cumprod)))\n        self.register_buffer('log_one_minus_alphas_cumprod', to_torch(np.log(1. - alphas_cumprod)))\n        self.register_buffer('sqrt_recip_alphas_cumprod', to_torch(np.sqrt(1. / alphas_cumprod)))\n        self.register_buffer('sqrt_recipm1_alphas_cumprod', to_torch(np.sqrt(1. / alphas_cumprod - 1)))\n\n        # calculations for posterior q(x_{t-1} | x_t, x_0)\n        posterior_variance = (1 - self.v_posterior) * betas * (1. - alphas_cumprod_prev) / (\n                1. - alphas_cumprod) + self.v_posterior * betas\n        # above: equal to 1. / (1. / (1. - alpha_cumprod_tm1) + alpha_t / beta_t)\n        self.register_buffer('posterior_variance', to_torch(posterior_variance))\n        # below: log calculation clipped because the posterior variance is 0 at the beginning of the diffusion chain\n        self.register_buffer('posterior_log_variance_clipped', to_torch(np.log(np.maximum(posterior_variance, 1e-20))))\n        self.register_buffer('posterior_mean_coef1', to_torch(\n            betas * np.sqrt(alphas_cumprod_prev) / (1. - alphas_cumprod)))\n        self.register_buffer('posterior_mean_coef2', to_torch(\n            (1. - alphas_cumprod_prev) * np.sqrt(alphas) / (1. - alphas_cumprod)))\n\n        if self.parameterization == \"eps\":\n            lvlb_weights = self.betas ** 2 / (\n                    2 * self.posterior_variance * to_torch(alphas) * (1 - self.alphas_cumprod))\n        elif self.parameterization == \"x0\":\n            lvlb_weights = 0.5 * np.sqrt(torch.Tensor(alphas_cumprod)) / (2. 
* 1 - torch.Tensor(alphas_cumprod))\n        elif self.parameterization == \"v\":\n            lvlb_weights = torch.ones_like(self.betas ** 2 / (\n                    2 * self.posterior_variance * to_torch(alphas) * (1 - self.alphas_cumprod)))\n        else:\n            raise NotImplementedError(\"mu not supported\")\n        lvlb_weights[0] = lvlb_weights[1]\n        self.register_buffer('lvlb_weights', lvlb_weights, persistent=False)\n        assert not torch.isnan(self.lvlb_weights).all()\n\n    @contextmanager\n    def ema_scope(self, context=None):\n        if self.use_ema:\n            self.model_ema.store(self.model.parameters())\n            self.model_ema.copy_to(self.model)\n            if context is not None:\n                print(f\"{context}: Switched to EMA weights\")\n        try:\n            yield None\n        finally:\n            if self.use_ema:\n                self.model_ema.restore(self.model.parameters())\n                if context is not None:\n                    print(f\"{context}: Restored training weights\")\n\n    @torch.no_grad()\n    def init_from_ckpt(self, path, ignore_keys=list(), only_model=False):\n        sd = torch.load(path, map_location=\"cpu\")\n        if \"state_dict\" in list(sd.keys()):\n            sd = sd[\"state_dict\"]\n        keys = list(sd.keys())\n        for k in keys:\n            for ik in ignore_keys:\n                if k.startswith(ik):\n                    print(\"Deleting key {} from state_dict.\".format(k))\n                    del sd[k]\n        if self.make_it_fit:\n            n_params = len([name for name, _ in\n                            itertools.chain(self.named_parameters(),\n                                            self.named_buffers())])\n            for name, param in tqdm(\n                    itertools.chain(self.named_parameters(),\n                                    self.named_buffers()),\n                    desc=\"Fitting old weights to new weights\",\n                    total=n_params\n            ):\n                if not name in sd:\n                    continue\n                old_shape = sd[name].shape\n                new_shape = param.shape\n                assert len(old_shape) == len(new_shape)\n                if len(new_shape) > 2:\n                    # we only modify first two axes\n                    assert new_shape[2:] == old_shape[2:]\n                # assumes first axis corresponds to output dim\n                if not new_shape == old_shape:\n                    new_param = param.clone()\n                    old_param = sd[name]\n                    if len(new_shape) == 1:\n                        for i in range(new_param.shape[0]):\n                            new_param[i] = old_param[i % old_shape[0]]\n                    elif len(new_shape) >= 2:\n                        for i in range(new_param.shape[0]):\n                            for j in range(new_param.shape[1]):\n                                new_param[i, j] = old_param[i % old_shape[0], j % old_shape[1]]\n\n                        n_used_old = torch.ones(old_shape[1])\n                        for j in range(new_param.shape[1]):\n                            n_used_old[j % old_shape[1]] += 1\n                        n_used_new = torch.zeros(new_shape[1])\n                        for j in range(new_param.shape[1]):\n                            n_used_new[j] = n_used_old[j % old_shape[1]]\n\n                        n_used_new = n_used_new[None, :]\n                        while len(n_used_new.shape) < len(new_shape):\n 
                           n_used_new = n_used_new.unsqueeze(-1)\n                        new_param /= n_used_new\n\n                    sd[name] = new_param\n\n        missing, unexpected = self.load_state_dict(sd, strict=False) if not only_model else self.model.load_state_dict(\n            sd, strict=False)\n        print(f\"Restored from {path} with {len(missing)} missing and {len(unexpected)} unexpected keys\")\n        if len(missing) > 0:\n            print(f\"Missing Keys:\\n {missing}\")\n        if len(unexpected) > 0:\n            print(f\"\\nUnexpected Keys:\\n {unexpected}\")\n\n    def q_mean_variance(self, x_start, t):\n        \"\"\"\n        Get the distribution q(x_t | x_0).\n        :param x_start: the [N x C x ...] tensor of noiseless inputs.\n        :param t: the number of diffusion steps (minus 1). Here, 0 means one step.\n        :return: A tuple (mean, variance, log_variance), all of x_start's shape.\n        \"\"\"\n        mean = (extract_into_tensor(self.sqrt_alphas_cumprod, t, x_start.shape) * x_start)\n        variance = extract_into_tensor(1.0 - self.alphas_cumprod, t, x_start.shape)\n        log_variance = extract_into_tensor(self.log_one_minus_alphas_cumprod, t, x_start.shape)\n        return mean, variance, log_variance\n\n    def predict_start_from_noise(self, x_t, t, noise):\n        return (\n                extract_into_tensor(self.sqrt_recip_alphas_cumprod, t, x_t.shape) * x_t -\n                extract_into_tensor(self.sqrt_recipm1_alphas_cumprod, t, x_t.shape) * noise\n        )\n\n    def predict_start_from_z_and_v(self, x_t, t, v):\n        # self.register_buffer('sqrt_alphas_cumprod', to_torch(np.sqrt(alphas_cumprod)))\n        # self.register_buffer('sqrt_one_minus_alphas_cumprod', to_torch(np.sqrt(1. - alphas_cumprod)))\n        return (\n                extract_into_tensor(self.sqrt_alphas_cumprod, t, x_t.shape) * x_t -\n                extract_into_tensor(self.sqrt_one_minus_alphas_cumprod, t, x_t.shape) * v\n        )\n\n    def predict_eps_from_z_and_v(self, x_t, t, v):\n        return (\n                extract_into_tensor(self.sqrt_alphas_cumprod, t, x_t.shape) * v +\n                extract_into_tensor(self.sqrt_one_minus_alphas_cumprod, t, x_t.shape) * x_t\n        )\n\n    def q_posterior(self, x_start, x_t, t):\n        posterior_mean = (\n                extract_into_tensor(self.posterior_mean_coef1, t, x_t.shape) * x_start +\n                extract_into_tensor(self.posterior_mean_coef2, t, x_t.shape) * x_t\n        )\n        posterior_variance = extract_into_tensor(self.posterior_variance, t, x_t.shape)\n        posterior_log_variance_clipped = extract_into_tensor(self.posterior_log_variance_clipped, t, x_t.shape)\n        return posterior_mean, posterior_variance, posterior_log_variance_clipped\n\n    def p_mean_variance(self, x, t, clip_denoised: bool):\n        model_out = self.model(x, t)\n        if self.parameterization == \"eps\":\n            x_recon = self.predict_start_from_noise(x, t=t, noise=model_out)\n        elif self.parameterization == \"x0\":\n            x_recon = model_out\n        if clip_denoised:\n            x_recon.clamp_(-1., 1.)\n\n        model_mean, posterior_variance, posterior_log_variance = self.q_posterior(x_start=x_recon, x_t=x, t=t)\n        return model_mean, posterior_variance, posterior_log_variance\n\n    @torch.no_grad()\n    def p_sample(self, x, t, clip_denoised=True, repeat_noise=False):\n        b, *_, device = *x.shape, x.device\n        model_mean, _, model_log_variance = 
self.p_mean_variance(x=x, t=t, clip_denoised=clip_denoised)\n        noise = noise_like(x.shape, device, repeat_noise)\n        # no noise when t == 0\n        nonzero_mask = (1 - (t == 0).float()).reshape(b, *((1,) * (len(x.shape) - 1)))\n        return model_mean + nonzero_mask * (0.5 * model_log_variance).exp() * noise\n\n    @torch.no_grad()\n    def p_sample_loop(self, shape, return_intermediates=False):\n        device = self.betas.device\n        b = shape[0]\n        img = torch.randn(shape, device=device)\n        intermediates = [img]\n        for i in tqdm(reversed(range(0, self.num_timesteps)), desc='Sampling t', total=self.num_timesteps):\n            img = self.p_sample(img, torch.full((b,), i, device=device, dtype=torch.long),\n                                clip_denoised=self.clip_denoised)\n            if i % self.log_every_t == 0 or i == self.num_timesteps - 1:\n                intermediates.append(img)\n        if return_intermediates:\n            return img, intermediates\n        return img\n\n    @torch.no_grad()\n    def sample(self, batch_size=16, return_intermediates=False):\n        image_size = self.image_size\n        channels = self.channels\n        return self.p_sample_loop((batch_size, channels, image_size, image_size),\n                                  return_intermediates=return_intermediates)\n\n    def q_sample(self, x_start, t, noise=None):\n        noise = default(noise, lambda: torch.randn_like(x_start))\n        return (extract_into_tensor(self.sqrt_alphas_cumprod, t, x_start.shape) * x_start +\n                extract_into_tensor(self.sqrt_one_minus_alphas_cumprod, t, x_start.shape) * noise)\n\n    def get_v(self, x, noise, t):\n        return (\n                extract_into_tensor(self.sqrt_alphas_cumprod, t, x.shape) * noise -\n                extract_into_tensor(self.sqrt_one_minus_alphas_cumprod, t, x.shape) * x\n        )\n\n    def get_loss(self, pred, target, mean=True):\n        if self.loss_type == 'l1':\n            loss = (target - pred).abs()\n            if mean:\n                loss = loss.mean()\n        elif self.loss_type == 'l2':\n            if mean:\n                loss = torch.nn.functional.mse_loss(target, pred)\n            else:\n                loss = torch.nn.functional.mse_loss(target, pred, reduction='none')\n        else:\n            raise NotImplementedError(\"unknown loss type '{loss_type}'\")\n\n        return loss\n\n    def p_losses(self, x_start, t, noise=None):\n        noise = default(noise, lambda: torch.randn_like(x_start))\n        x_noisy = self.q_sample(x_start=x_start, t=t, noise=noise)\n        model_out = self.model(x_noisy, t)\n\n        loss_dict = {}\n        if self.parameterization == \"eps\":\n            target = noise\n        elif self.parameterization == \"x0\":\n            target = x_start\n        elif self.parameterization == \"v\":\n            target = self.get_v(x_start, noise, t)\n        else:\n            raise NotImplementedError(f\"Parameterization {self.parameterization} not yet supported\")\n\n        loss = self.get_loss(model_out, target, mean=False).mean(dim=[1, 2, 3])\n\n        log_prefix = 'train' if self.training else 'val'\n\n        loss_dict.update({f'{log_prefix}/loss_simple': loss.mean()})\n        loss_simple = loss.mean() * self.l_simple_weight\n\n        loss_vlb = (self.lvlb_weights[t] * loss).mean()\n        loss_dict.update({f'{log_prefix}/loss_vlb': loss_vlb})\n\n        loss = loss_simple + self.original_elbo_weight * loss_vlb\n\n        
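# log the combined objective: reweighted simple loss plus original_elbo_weight * VLB term\n        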
loss_dict.update({f'{log_prefix}/loss': loss})\n\n        return loss, loss_dict\n\n    def forward(self, x, *args, **kwargs):\n        # b, c, h, w, device, img_size, = *x.shape, x.device, self.image_size\n        # assert h == img_size and w == img_size, f'height and width of image must be {img_size}'\n        t = torch.randint(0, self.num_timesteps, (x.shape[0],), device=self.device).long()\n        return self.p_losses(x, t, *args, **kwargs)\n\n    def get_input(self, batch, k):\n        x = batch[k]\n        if len(x.shape) == 3:\n            x = x[..., None]\n        x = rearrange(x, 'b h w c -> b c h w')\n        x = x.to(memory_format=torch.contiguous_format).float()\n        return x\n\n    def shared_step(self, batch):\n        x = self.get_input(batch, self.first_stage_key)\n        loss, loss_dict = self(x)\n        return loss, loss_dict\n\n    def training_step(self, batch, batch_idx):\n        for k in self.ucg_training:\n            p = self.ucg_training[k][\"p\"]\n            val = self.ucg_training[k][\"val\"]\n            if val is None:\n                val = \"\"\n            for i in range(len(batch[k])):\n                if self.ucg_prng.choice(2, p=[1 - p, p]):\n                    batch[k][i] = val\n\n        loss, loss_dict = self.shared_step(batch)\n\n        self.log_dict(loss_dict, prog_bar=True,\n                      logger=True, on_step=True, on_epoch=True)\n\n        self.log(\"global_step\", self.global_step,\n                 prog_bar=True, logger=True, on_step=True, on_epoch=False)\n\n        if self.use_scheduler:\n            lr = self.optimizers().param_groups[0]['lr']\n            self.log('lr_abs', lr, prog_bar=True, logger=True, on_step=True, on_epoch=False)\n\n        return loss\n\n    @torch.no_grad()\n    def validation_step(self, batch, batch_idx):\n        _, loss_dict_no_ema = self.shared_step(batch)\n        with self.ema_scope():\n            _, loss_dict_ema = self.shared_step(batch)\n            loss_dict_ema = {key + '_ema': loss_dict_ema[key] for key in loss_dict_ema}\n        self.log_dict(loss_dict_no_ema, prog_bar=False, logger=True, on_step=False, on_epoch=True)\n        self.log_dict(loss_dict_ema, prog_bar=False, logger=True, on_step=False, on_epoch=True)\n\n    def on_train_batch_end(self, *args, **kwargs):\n        if self.use_ema:\n            self.model_ema(self.model)\n\n    def _get_rows_from_list(self, samples):\n        n_imgs_per_row = len(samples)\n        denoise_grid = rearrange(samples, 'n b c h w -> b n c h w')\n        denoise_grid = rearrange(denoise_grid, 'b n c h w -> (b n) c h w')\n        denoise_grid = make_grid(denoise_grid, nrow=n_imgs_per_row)\n        return denoise_grid\n\n    @torch.no_grad()\n    def log_images(self, batch, N=8, n_row=2, sample=True, return_keys=None, **kwargs):\n        log = dict()\n        x = self.get_input(batch, self.first_stage_key)\n        N = min(x.shape[0], N)\n        n_row = min(x.shape[0], n_row)\n        x = x.to(self.device)[:N]\n        log[\"inputs\"] = x\n\n        # get diffusion row\n        diffusion_row = list()\n        x_start = x[:n_row]\n\n        for t in range(self.num_timesteps):\n            if t % self.log_every_t == 0 or t == self.num_timesteps - 1:\n                t = repeat(torch.tensor([t]), '1 -> b', b=n_row)\n                t = t.to(self.device).long()\n                noise = torch.randn_like(x_start)\n                x_noisy = self.q_sample(x_start=x_start, t=t, noise=noise)\n                diffusion_row.append(x_noisy)\n\n        
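# arrange the progressively noised inputs into a single grid image for logging\n        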
log[\"diffusion_row\"] = self._get_rows_from_list(diffusion_row)\n\n        if sample:\n            # get denoise row\n            with self.ema_scope(\"Plotting\"):\n                samples, denoise_row = self.sample(batch_size=N, return_intermediates=True)\n\n            log[\"samples\"] = samples\n            log[\"denoise_row\"] = self._get_rows_from_list(denoise_row)\n\n        if return_keys:\n            if np.intersect1d(list(log.keys()), return_keys).shape[0] == 0:\n                return log\n            else:\n                return {key: log[key] for key in return_keys}\n        return log\n\n    def configure_optimizers(self):\n        lr = self.learning_rate\n        params = list(self.model.parameters())\n        if self.learn_logvar:\n            params = params + [self.logvar]\n        opt = torch.optim.AdamW(params, lr=lr)\n        return opt\n\n\nclass LatentDiffusion(DDPM):\n    \"\"\"main class\"\"\"\n\n    def __init__(self,\n                 first_stage_config,\n                 cond_stage_config,\n                 num_timesteps_cond=None,\n                 cond_stage_key=\"image\",\n                 cond_stage_trainable=False,\n                 concat_mode=True,\n                 cond_stage_forward=None,\n                 conditioning_key=None,\n                 scale_factor=1.0,\n                 scale_by_std=False,\n                 force_null_conditioning=False,\n                 *args, **kwargs):\n        self.force_null_conditioning = force_null_conditioning\n        self.num_timesteps_cond = default(num_timesteps_cond, 1)\n        self.scale_by_std = scale_by_std\n        assert self.num_timesteps_cond <= kwargs['timesteps']\n        # for backwards compatibility after implementation of DiffusionWrapper\n        if conditioning_key is None:\n            conditioning_key = 'concat' if concat_mode else 'crossattn'\n        if cond_stage_config == '__is_unconditional__' and not self.force_null_conditioning:\n            conditioning_key = None\n        ckpt_path = kwargs.pop(\"ckpt_path\", None)\n        reset_ema = kwargs.pop(\"reset_ema\", False)\n        reset_num_ema_updates = kwargs.pop(\"reset_num_ema_updates\", False)\n        ignore_keys = kwargs.pop(\"ignore_keys\", [])\n        super().__init__(conditioning_key=conditioning_key, *args, **kwargs)\n        self.concat_mode = concat_mode\n        self.cond_stage_trainable = cond_stage_trainable\n        self.cond_stage_key = cond_stage_key\n        try:\n            self.num_downs = len(first_stage_config.params.ddconfig.ch_mult) - 1\n        except:\n            self.num_downs = 0\n        if not scale_by_std:\n            self.scale_factor = scale_factor\n        else:\n            self.register_buffer('scale_factor', torch.tensor(scale_factor))\n        self.instantiate_first_stage(first_stage_config)\n        self.instantiate_cond_stage(cond_stage_config)\n        self.cond_stage_forward = cond_stage_forward\n        self.clip_denoised = False\n        self.bbox_tokenizer = None\n\n        self.restarted_from_ckpt = False\n        if ckpt_path is not None:\n            self.init_from_ckpt(ckpt_path, ignore_keys)\n            self.restarted_from_ckpt = True\n            if reset_ema:\n                assert self.use_ema\n                print(\n                    f\"Resetting ema to pure model weights. 
This is useful when restoring from an ema-only checkpoint.\")\n                self.model_ema = LitEma(self.model)\n        if reset_num_ema_updates:\n            print(\" +++++++++++ WARNING: RESETTING NUM_EMA UPDATES TO ZERO +++++++++++ \")\n            assert self.use_ema\n            self.model_ema.reset_num_updates()\n\n    def make_cond_schedule(self, ):\n        self.cond_ids = torch.full(size=(self.num_timesteps,), fill_value=self.num_timesteps - 1, dtype=torch.long)\n        ids = torch.round(torch.linspace(0, self.num_timesteps - 1, self.num_timesteps_cond)).long()\n        self.cond_ids[:self.num_timesteps_cond] = ids\n\n    @rank_zero_only\n    @torch.no_grad()\n    def on_train_batch_start(self, batch, batch_idx, dataloader_idx):\n        # only for very first batch\n        if self.scale_by_std and self.current_epoch == 0 and self.global_step == 0 and batch_idx == 0 and not self.restarted_from_ckpt:\n            assert self.scale_factor == 1., 'rather not use custom rescaling and std-rescaling simultaneously'\n            # set rescale weight to 1./std of encodings\n            print(\"### USING STD-RESCALING ###\")\n            x = super().get_input(batch, self.first_stage_key)\n            x = x.to(self.device)\n            encoder_posterior = self.encode_first_stage(x)\n            z = self.get_first_stage_encoding(encoder_posterior).detach()\n            del self.scale_factor\n            self.register_buffer('scale_factor', 1. / z.flatten().std())\n            print(f\"setting self.scale_factor to {self.scale_factor}\")\n            print(\"### USING STD-RESCALING ###\")\n\n    def register_schedule(self,\n                          given_betas=None, beta_schedule=\"linear\", timesteps=1000,\n                          linear_start=1e-4, linear_end=2e-2, cosine_s=8e-3):\n        super().register_schedule(given_betas, beta_schedule, timesteps, linear_start, linear_end, cosine_s)\n\n        self.shorten_cond_schedule = self.num_timesteps_cond > 1\n        if self.shorten_cond_schedule:\n            self.make_cond_schedule()\n\n    def instantiate_first_stage(self, config):\n        model = instantiate_from_config(config)\n        self.first_stage_model = model.eval()\n        self.first_stage_model.train = disabled_train\n        for param in self.first_stage_model.parameters():\n            param.requires_grad = False\n\n    def instantiate_cond_stage(self, config):\n        if not self.cond_stage_trainable:\n            if config == \"__is_first_stage__\":\n                print(\"Using first stage also as cond stage.\")\n                self.cond_stage_model = self.first_stage_model\n            elif config == \"__is_unconditional__\":\n                print(f\"Training {self.__class__.__name__} as an unconditional model.\")\n                self.cond_stage_model = None\n                # self.be_unconditional = True\n            else:\n                model = instantiate_from_config(config)\n                self.cond_stage_model = model.eval()\n                self.cond_stage_model.train = disabled_train\n                for param in self.cond_stage_model.parameters():\n                    param.requires_grad = False\n        else:\n            assert config != '__is_first_stage__'\n            assert config != '__is_unconditional__'\n            model = instantiate_from_config(config)\n            self.cond_stage_model = model\n\n    def _get_denoise_row_from_list(self, samples, desc='', force_no_decoder_quantization=False):\n        denoise_row = []\n        for zd in 
tqdm(samples, desc=desc):\n            denoise_row.append(self.decode_first_stage(zd.to(self.device),\n                                                       force_not_quantize=force_no_decoder_quantization))\n        n_imgs_per_row = len(denoise_row)\n        denoise_row = torch.stack(denoise_row)  # n_log_step, n_row, C, H, W\n        denoise_grid = rearrange(denoise_row, 'n b c h w -> b n c h w')\n        denoise_grid = rearrange(denoise_grid, 'b n c h w -> (b n) c h w')\n        denoise_grid = make_grid(denoise_grid, nrow=n_imgs_per_row)\n        return denoise_grid\n\n    def get_first_stage_encoding(self, encoder_posterior):\n        if isinstance(encoder_posterior, DiagonalGaussianDistribution):\n            z = encoder_posterior.sample()\n        elif isinstance(encoder_posterior, torch.Tensor):\n            z = encoder_posterior\n        else:\n            raise NotImplementedError(f\"encoder_posterior of type '{type(encoder_posterior)}' not yet implemented\")\n        return self.scale_factor * z\n\n    def get_learned_conditioning(self, c):\n        if self.cond_stage_forward is None:\n            if hasattr(self.cond_stage_model, 'encode') and callable(self.cond_stage_model.encode):\n                c = self.cond_stage_model.encode(c)\n                if isinstance(c, DiagonalGaussianDistribution):\n                    c = c.mode()\n            else:\n                c = self.cond_stage_model(c)\n        else:\n            assert hasattr(self.cond_stage_model, self.cond_stage_forward)\n            c = getattr(self.cond_stage_model, self.cond_stage_forward)(c)\n        return c\n\n    def meshgrid(self, h, w):\n        y = torch.arange(0, h).view(h, 1, 1).repeat(1, w, 1)\n        x = torch.arange(0, w).view(1, w, 1).repeat(h, 1, 1)\n\n        arr = torch.cat([y, x], dim=-1)\n        return arr\n\n    def delta_border(self, h, w):\n        \"\"\"\n        :param h: height\n        :param w: width\n        :return: normalized distance to image border,\n         wtith min distance = 0 at border and max dist = 0.5 at image center\n        \"\"\"\n        lower_right_corner = torch.tensor([h - 1, w - 1]).view(1, 1, 2)\n        arr = self.meshgrid(h, w) / lower_right_corner\n        dist_left_up = torch.min(arr, dim=-1, keepdims=True)[0]\n        dist_right_down = torch.min(1 - arr, dim=-1, keepdims=True)[0]\n        edge_dist = torch.min(torch.cat([dist_left_up, dist_right_down], dim=-1), dim=-1)[0]\n        return edge_dist\n\n    def get_weighting(self, h, w, Ly, Lx, device):\n        weighting = self.delta_border(h, w)\n        weighting = torch.clip(weighting, self.split_input_params[\"clip_min_weight\"],\n                               self.split_input_params[\"clip_max_weight\"], )\n        weighting = weighting.view(1, h * w, 1).repeat(1, 1, Ly * Lx).to(device)\n\n        if self.split_input_params[\"tie_braker\"]:\n            L_weighting = self.delta_border(Ly, Lx)\n            L_weighting = torch.clip(L_weighting,\n                                     self.split_input_params[\"clip_min_tie_weight\"],\n                                     self.split_input_params[\"clip_max_tie_weight\"])\n\n            L_weighting = L_weighting.view(1, 1, Ly * Lx).to(device)\n            weighting = weighting * L_weighting\n        return weighting\n\n    def get_fold_unfold(self, x, kernel_size, stride, uf=1, df=1):  # todo load once not every time, shorten code\n        \"\"\"\n        :param x: img of size (bs, c, h, w)\n        :return: n img crops of size (n, bs, c, kernel_size[0], 
kernel_size[1])\n        \"\"\"\n        bs, nc, h, w = x.shape\n\n        # number of crops in image\n        Ly = (h - kernel_size[0]) // stride[0] + 1\n        Lx = (w - kernel_size[1]) // stride[1] + 1\n\n        if uf == 1 and df == 1:\n            fold_params = dict(kernel_size=kernel_size, dilation=1, padding=0, stride=stride)\n            unfold = torch.nn.Unfold(**fold_params)\n\n            fold = torch.nn.Fold(output_size=x.shape[2:], **fold_params)\n\n            weighting = self.get_weighting(kernel_size[0], kernel_size[1], Ly, Lx, x.device).to(x.dtype)\n            normalization = fold(weighting).view(1, 1, h, w)  # normalizes the overlap\n            weighting = weighting.view((1, 1, kernel_size[0], kernel_size[1], Ly * Lx))\n\n        elif uf > 1 and df == 1:\n            fold_params = dict(kernel_size=kernel_size, dilation=1, padding=0, stride=stride)\n            unfold = torch.nn.Unfold(**fold_params)\n\n            fold_params2 = dict(kernel_size=(kernel_size[0] * uf, kernel_size[0] * uf),\n                                dilation=1, padding=0,\n                                stride=(stride[0] * uf, stride[1] * uf))\n            fold = torch.nn.Fold(output_size=(x.shape[2] * uf, x.shape[3] * uf), **fold_params2)\n\n            weighting = self.get_weighting(kernel_size[0] * uf, kernel_size[1] * uf, Ly, Lx, x.device).to(x.dtype)\n            normalization = fold(weighting).view(1, 1, h * uf, w * uf)  # normalizes the overlap\n            weighting = weighting.view((1, 1, kernel_size[0] * uf, kernel_size[1] * uf, Ly * Lx))\n\n        elif df > 1 and uf == 1:\n            fold_params = dict(kernel_size=kernel_size, dilation=1, padding=0, stride=stride)\n            unfold = torch.nn.Unfold(**fold_params)\n\n            fold_params2 = dict(kernel_size=(kernel_size[0] // df, kernel_size[0] // df),\n                                dilation=1, padding=0,\n                                stride=(stride[0] // df, stride[1] // df))\n            fold = torch.nn.Fold(output_size=(x.shape[2] // df, x.shape[3] // df), **fold_params2)\n\n            weighting = self.get_weighting(kernel_size[0] // df, kernel_size[1] // df, Ly, Lx, x.device).to(x.dtype)\n            normalization = fold(weighting).view(1, 1, h // df, w // df)  # normalizes the overlap\n            weighting = weighting.view((1, 1, kernel_size[0] // df, kernel_size[1] // df, Ly * Lx))\n\n        else:\n            raise NotImplementedError\n\n        return fold, unfold, normalization, weighting\n\n    @torch.no_grad()\n    def get_input(self, batch, k, return_first_stage_outputs=False, force_c_encode=False,\n                  cond_key=None, return_original_cond=False, bs=None, return_x=False):\n        x = super().get_input(batch, k)\n        if bs is not None:\n            x = x[:bs]\n        x = x.to(self.device)\n        encoder_posterior = self.encode_first_stage(x)\n        z = self.get_first_stage_encoding(encoder_posterior).detach()\n\n        if self.model.conditioning_key is not None and not self.force_null_conditioning:\n            if cond_key is None:\n                cond_key = self.cond_stage_key\n            if cond_key != self.first_stage_key:\n                if cond_key in ['caption', 'coordinates_bbox', \"txt\"]:\n                    xc = batch[cond_key]\n                elif cond_key in ['class_label', 'cls']:\n                    xc = batch\n                else:\n                    xc = super().get_input(batch, cond_key).to(self.device)\n            else:\n                xc = x\n            if 
not self.cond_stage_trainable or force_c_encode:\n                if isinstance(xc, dict) or isinstance(xc, list):\n                    c = self.get_learned_conditioning(xc)\n                else:\n                    c = self.get_learned_conditioning(xc.to(self.device))\n            else:\n                c = xc\n            if bs is not None:\n                c = c[:bs]\n\n            if self.use_positional_encodings:\n                pos_x, pos_y = self.compute_latent_shifts(batch)\n                ckey = __conditioning_keys__[self.model.conditioning_key]\n                c = {ckey: c, 'pos_x': pos_x, 'pos_y': pos_y}\n\n        else:\n            c = None\n            xc = None\n            if self.use_positional_encodings:\n                pos_x, pos_y = self.compute_latent_shifts(batch)\n                c = {'pos_x': pos_x, 'pos_y': pos_y}\n        out = [z, c]\n        if return_first_stage_outputs:\n            xrec = self.decode_first_stage(z)\n            out.extend([x, xrec])\n        if return_x:\n            out.extend([x])\n        if return_original_cond:\n            out.append(xc)\n        return out\n\n    @torch.no_grad()\n    def decode_first_stage(self, z, predict_cids=False, force_not_quantize=False):\n        if predict_cids:\n            if z.dim() == 4:\n                z = torch.argmax(z.exp(), dim=1).long()\n            z = self.first_stage_model.quantize.get_codebook_entry(z, shape=None)\n            z = rearrange(z, 'b h w c -> b c h w').contiguous()\n\n        z = 1. / self.scale_factor * z\n        return self.first_stage_model.decode(z)\n\n    @torch.no_grad()\n    def encode_first_stage(self, x):\n        return self.first_stage_model.encode(x)\n\n    def shared_step(self, batch, **kwargs):\n        x, c = self.get_input(batch, self.first_stage_key)\n        loss = self(x, c)\n        return loss\n\n    def forward(self, x, c, *args, **kwargs):\n        t = torch.randint(0, self.num_timesteps, (x.shape[0],), device=self.device).long()\n        if self.model.conditioning_key is not None:\n            assert c is not None\n            if self.cond_stage_trainable:\n                c = self.get_learned_conditioning(c)\n            if self.shorten_cond_schedule:  # TODO: drop this option\n                tc = self.cond_ids[t].to(self.device)\n                c = self.q_sample(x_start=c, t=tc, noise=torch.randn_like(c.float()))\n        return self.p_losses(x, c, t, *args, **kwargs)\n\n    def apply_model(self, x_noisy, t, cond, return_ids=False):\n        if isinstance(cond, dict):\n            # hybrid case, cond is expected to be a dict\n            pass\n        else:\n            if not isinstance(cond, list):\n                cond = [cond]\n            key = 'c_concat' if self.model.conditioning_key == 'concat' else 'c_crossattn'\n            cond = {key: cond}\n\n        x_recon = self.model(x_noisy, t, **cond)\n\n        if isinstance(x_recon, tuple) and not return_ids:\n            return x_recon[0]\n        else:\n            return x_recon\n\n    def _predict_eps_from_xstart(self, x_t, t, pred_xstart):\n        return (extract_into_tensor(self.sqrt_recip_alphas_cumprod, t, x_t.shape) * x_t - pred_xstart) / \\\n               extract_into_tensor(self.sqrt_recipm1_alphas_cumprod, t, x_t.shape)\n\n    def _prior_bpd(self, x_start):\n        \"\"\"\n        Get the prior KL term for the variational lower-bound, measured in\n        bits-per-dim.\n        This term can't be optimized, as it only depends on the encoder.\n        :param x_start: the [N x C x 
...] tensor of inputs.\n        :return: a batch of [N] KL values (in bits), one per batch element.\n        \"\"\"\n        batch_size = x_start.shape[0]\n        t = torch.tensor([self.num_timesteps - 1] * batch_size, device=x_start.device)\n        qt_mean, _, qt_log_variance = self.q_mean_variance(x_start, t)\n        kl_prior = normal_kl(mean1=qt_mean, logvar1=qt_log_variance, mean2=0.0, logvar2=0.0)\n        return mean_flat(kl_prior) / np.log(2.0)\n\n    def p_losses(self, x_start, cond, t, noise=None):\n        noise = default(noise, lambda: torch.randn_like(x_start))\n        x_noisy = self.q_sample(x_start=x_start, t=t, noise=noise)\n        model_output = self.apply_model(x_noisy, t, cond)\n\n        loss_dict = {}\n        prefix = 'train' if self.training else 'val'\n\n        if self.parameterization == \"x0\":\n            target = x_start\n        elif self.parameterization == \"eps\":\n            target = noise\n        elif self.parameterization == \"v\":\n            target = self.get_v(x_start, noise, t)\n        else:\n            raise NotImplementedError()\n\n        loss_simple = self.get_loss(model_output, target, mean=False).mean([1, 2, 3])\n        loss_dict.update({f'{prefix}/loss_simple': loss_simple.mean()})\n\n        logvar_t = self.logvar[t].to(self.device)\n        loss = loss_simple / torch.exp(logvar_t) + logvar_t\n        # loss = loss_simple / torch.exp(self.logvar) + self.logvar\n        if self.learn_logvar:\n            loss_dict.update({f'{prefix}/loss_gamma': loss.mean()})\n            loss_dict.update({'logvar': self.logvar.data.mean()})\n\n        loss = self.l_simple_weight * loss.mean()\n\n        loss_vlb = self.get_loss(model_output, target, mean=False).mean(dim=(1, 2, 3))\n        loss_vlb = (self.lvlb_weights[t] * loss_vlb).mean()\n        loss_dict.update({f'{prefix}/loss_vlb': loss_vlb})\n        loss += (self.original_elbo_weight * loss_vlb)\n        loss_dict.update({f'{prefix}/loss': loss})\n\n        return loss, loss_dict\n\n    def p_mean_variance(self, x, c, t, clip_denoised: bool, return_codebook_ids=False, quantize_denoised=False,\n                        return_x0=False, score_corrector=None, corrector_kwargs=None):\n        t_in = t\n        model_out = self.apply_model(x, t_in, c, return_ids=return_codebook_ids)\n\n        if score_corrector is not None:\n            assert self.parameterization == \"eps\"\n            model_out = score_corrector.modify_score(self, model_out, x, t, c, **corrector_kwargs)\n\n        if return_codebook_ids:\n            model_out, logits = model_out\n\n        if self.parameterization == \"eps\":\n            x_recon = self.predict_start_from_noise(x, t=t, noise=model_out)\n        elif self.parameterization == \"x0\":\n            x_recon = model_out\n        else:\n            raise NotImplementedError()\n\n        if clip_denoised:\n            x_recon.clamp_(-1., 1.)\n        if quantize_denoised:\n            x_recon, _, [_, _, indices] = self.first_stage_model.quantize(x_recon)\n        model_mean, posterior_variance, posterior_log_variance = self.q_posterior(x_start=x_recon, x_t=x, t=t)\n        if return_codebook_ids:\n            return model_mean, posterior_variance, posterior_log_variance, logits\n        elif return_x0:\n            return model_mean, posterior_variance, posterior_log_variance, x_recon\n        else:\n            return model_mean, posterior_variance, posterior_log_variance\n\n    @torch.no_grad()\n    def p_sample(self, x, c, t, clip_denoised=False, 
repeat_noise=False,\n                 return_codebook_ids=False, quantize_denoised=False, return_x0=False,\n                 temperature=1., noise_dropout=0., score_corrector=None, corrector_kwargs=None):\n        b, *_, device = *x.shape, x.device\n        outputs = self.p_mean_variance(x=x, c=c, t=t, clip_denoised=clip_denoised,\n                                       return_codebook_ids=return_codebook_ids,\n                                       quantize_denoised=quantize_denoised,\n                                       return_x0=return_x0,\n                                       score_corrector=score_corrector, corrector_kwargs=corrector_kwargs)\n        if return_codebook_ids:\n            raise DeprecationWarning(\"Support dropped.\")\n            model_mean, _, model_log_variance, logits = outputs\n        elif return_x0:\n            model_mean, _, model_log_variance, x0 = outputs\n        else:\n            model_mean, _, model_log_variance = outputs\n\n        noise = noise_like(x.shape, device, repeat_noise) * temperature\n        if noise_dropout > 0.:\n            noise = torch.nn.functional.dropout(noise, p=noise_dropout)\n        # no noise when t == 0\n        nonzero_mask = (1 - (t == 0).float()).reshape(b, *((1,) * (len(x.shape) - 1)))\n\n        if return_codebook_ids:\n            return model_mean + nonzero_mask * (0.5 * model_log_variance).exp() * noise, logits.argmax(dim=1)\n        if return_x0:\n            return model_mean + nonzero_mask * (0.5 * model_log_variance).exp() * noise, x0\n        else:\n            return model_mean + nonzero_mask * (0.5 * model_log_variance).exp() * noise\n\n    @torch.no_grad()\n    def progressive_denoising(self, cond, shape, verbose=True, callback=None, quantize_denoised=False,\n                              img_callback=None, mask=None, x0=None, temperature=1., noise_dropout=0.,\n                              score_corrector=None, corrector_kwargs=None, batch_size=None, x_T=None, start_T=None,\n                              log_every_t=None):\n        if not log_every_t:\n            log_every_t = self.log_every_t\n        timesteps = self.num_timesteps\n        if batch_size is not None:\n            b = batch_size if batch_size is not None else shape[0]\n            shape = [batch_size] + list(shape)\n        else:\n            b = batch_size = shape[0]\n        if x_T is None:\n            img = torch.randn(shape, device=self.device)\n        else:\n            img = x_T\n        intermediates = []\n        if cond is not None:\n            if isinstance(cond, dict):\n                cond = {key: cond[key][:batch_size] if not isinstance(cond[key], list) else\n                list(map(lambda x: x[:batch_size], cond[key])) for key in cond}\n            else:\n                cond = [c[:batch_size] for c in cond] if isinstance(cond, list) else cond[:batch_size]\n\n        if start_T is not None:\n            timesteps = min(timesteps, start_T)\n        iterator = tqdm(reversed(range(0, timesteps)), desc='Progressive Generation',\n                        total=timesteps) if verbose else reversed(\n            range(0, timesteps))\n        if type(temperature) == float:\n            temperature = [temperature] * timesteps\n\n        for i in iterator:\n            ts = torch.full((b,), i, device=self.device, dtype=torch.long)\n            if self.shorten_cond_schedule:\n                assert self.model.conditioning_key != 'hybrid'\n                tc = self.cond_ids[ts].to(cond.device)\n                cond = 
self.q_sample(x_start=cond, t=tc, noise=torch.randn_like(cond))\n\n            img, x0_partial = self.p_sample(img, cond, ts,\n                                            clip_denoised=self.clip_denoised,\n                                            quantize_denoised=quantize_denoised, return_x0=True,\n                                            temperature=temperature[i], noise_dropout=noise_dropout,\n                                            score_corrector=score_corrector, corrector_kwargs=corrector_kwargs)\n            if mask is not None:\n                assert x0 is not None\n                img_orig = self.q_sample(x0, ts)\n                img = img_orig * mask + (1. - mask) * img\n\n            if i % log_every_t == 0 or i == timesteps - 1:\n                intermediates.append(x0_partial)\n            if callback: callback(i)\n            if img_callback: img_callback(img, i)\n        return img, intermediates\n\n    @torch.no_grad()\n    def p_sample_loop(self, cond, shape, return_intermediates=False,\n                      x_T=None, verbose=True, callback=None, timesteps=None, quantize_denoised=False,\n                      mask=None, x0=None, img_callback=None, start_T=None,\n                      log_every_t=None):\n\n        if not log_every_t:\n            log_every_t = self.log_every_t\n        device = self.betas.device\n        b = shape[0]\n        if x_T is None:\n            img = torch.randn(shape, device=device)\n        else:\n            img = x_T\n\n        intermediates = [img]\n        if timesteps is None:\n            timesteps = self.num_timesteps\n\n        if start_T is not None:\n            timesteps = min(timesteps, start_T)\n        iterator = tqdm(reversed(range(0, timesteps)), desc='Sampling t', total=timesteps) if verbose else reversed(\n            range(0, timesteps))\n\n        if mask is not None:\n            assert x0 is not None\n            assert x0.shape[2:3] == mask.shape[2:3]  # spatial size has to match\n\n        for i in iterator:\n            ts = torch.full((b,), i, device=device, dtype=torch.long)\n            if self.shorten_cond_schedule:\n                assert self.model.conditioning_key != 'hybrid'\n                tc = self.cond_ids[ts].to(cond.device)\n                cond = self.q_sample(x_start=cond, t=tc, noise=torch.randn_like(cond))\n\n            img = self.p_sample(img, cond, ts,\n                                clip_denoised=self.clip_denoised,\n                                quantize_denoised=quantize_denoised)\n            if mask is not None:\n                img_orig = self.q_sample(x0, ts)\n                img = img_orig * mask + (1. 
- mask) * img\n\n            if i % log_every_t == 0 or i == timesteps - 1:\n                intermediates.append(img)\n            if callback: callback(i)\n            if img_callback: img_callback(img, i)\n\n        if return_intermediates:\n            return img, intermediates\n        return img\n\n    @torch.no_grad()\n    def sample(self, cond, batch_size=16, return_intermediates=False, x_T=None,\n               verbose=True, timesteps=None, quantize_denoised=False,\n               mask=None, x0=None, shape=None, **kwargs):\n        if shape is None:\n            shape = (batch_size, self.channels, self.image_size, self.image_size)\n        if cond is not None:\n            if isinstance(cond, dict):\n                cond = {key: cond[key][:batch_size] if not isinstance(cond[key], list) else\n                list(map(lambda x: x[:batch_size], cond[key])) for key in cond}\n            else:\n                cond = [c[:batch_size] for c in cond] if isinstance(cond, list) else cond[:batch_size]\n        return self.p_sample_loop(cond,\n                                  shape,\n                                  return_intermediates=return_intermediates, x_T=x_T,\n                                  verbose=verbose, timesteps=timesteps, quantize_denoised=quantize_denoised,\n                                  mask=mask, x0=x0)\n\n    @torch.no_grad()\n    def sample_log(self, cond, batch_size, ddim, ddim_steps, **kwargs):\n        if ddim:\n            ddim_sampler = DDIMSampler(self)\n            shape = (self.channels, self.image_size, self.image_size)\n            samples, intermediates = ddim_sampler.sample(ddim_steps, batch_size,\n                                                         shape, cond, verbose=False, **kwargs)\n\n        else:\n            samples, intermediates = self.sample(cond=cond, batch_size=batch_size,\n                                                 return_intermediates=True, **kwargs)\n\n        return samples, intermediates\n\n    @torch.no_grad()\n    def get_unconditional_conditioning(self, batch_size, null_label=None):\n        if null_label is not None:\n            xc = null_label\n            if isinstance(xc, ListConfig):\n                xc = list(xc)\n            if isinstance(xc, dict) or isinstance(xc, list):\n                c = self.get_learned_conditioning(xc)\n            else:\n                if hasattr(xc, \"to\"):\n                    xc = xc.to(self.device)\n                c = self.get_learned_conditioning(xc)\n        else:\n            if self.cond_stage_key in [\"class_label\", \"cls\"]:\n                xc = self.cond_stage_model.get_unconditional_conditioning(batch_size, device=self.device)\n                return self.get_learned_conditioning(xc)\n            else:\n                raise NotImplementedError(\"todo\")\n        if isinstance(c, list):  # in case the encoder gives us a list\n            for i in range(len(c)):\n                c[i] = repeat(c[i], '1 ... -> b ...', b=batch_size).to(self.device)\n        else:\n            c = repeat(c, '1 ... 
-> b ...', b=batch_size).to(self.device)\n        return c\n\n    @torch.no_grad()\n    def log_images(self, batch, N=8, n_row=4, sample=True, ddim_steps=50, ddim_eta=0., return_keys=None,\n                   quantize_denoised=True, inpaint=True, plot_denoise_rows=False, plot_progressive_rows=True,\n                   plot_diffusion_rows=True, unconditional_guidance_scale=1., unconditional_guidance_label=None,\n                   use_ema_scope=True,\n                   **kwargs):\n        ema_scope = self.ema_scope if use_ema_scope else nullcontext\n        use_ddim = ddim_steps is not None\n\n        log = dict()\n        z, c, x, xrec, xc = self.get_input(batch, self.first_stage_key,\n                                           return_first_stage_outputs=True,\n                                           force_c_encode=True,\n                                           return_original_cond=True,\n                                           bs=N)\n        N = min(x.shape[0], N)\n        n_row = min(x.shape[0], n_row)\n        log[\"inputs\"] = x\n        log[\"reconstruction\"] = xrec\n        if self.model.conditioning_key is not None:\n            if hasattr(self.cond_stage_model, \"decode\"):\n                xc = self.cond_stage_model.decode(c)\n                log[\"conditioning\"] = xc\n            elif self.cond_stage_key in [\"caption\", \"txt\"]:\n                xc = log_txt_as_img((x.shape[2], x.shape[3]), batch[self.cond_stage_key], size=x.shape[2] // 25)\n                log[\"conditioning\"] = xc\n            elif self.cond_stage_key in ['class_label', \"cls\"]:\n                try:\n                    xc = log_txt_as_img((x.shape[2], x.shape[3]), batch[\"human_label\"], size=x.shape[2] // 25)\n                    log['conditioning'] = xc\n                except KeyError:\n                    # probably no \"human_label\" in batch\n                    pass\n            elif isimage(xc):\n                log[\"conditioning\"] = xc\n            if ismap(xc):\n                log[\"original_conditioning\"] = self.to_rgb(xc)\n\n        if plot_diffusion_rows:\n            # get diffusion row\n            diffusion_row = list()\n            z_start = z[:n_row]\n            for t in range(self.num_timesteps):\n                if t % self.log_every_t == 0 or t == self.num_timesteps - 1:\n                    t = repeat(torch.tensor([t]), '1 -> b', b=n_row)\n                    t = t.to(self.device).long()\n                    noise = torch.randn_like(z_start)\n                    z_noisy = self.q_sample(x_start=z_start, t=t, noise=noise)\n                    diffusion_row.append(self.decode_first_stage(z_noisy))\n\n            diffusion_row = torch.stack(diffusion_row)  # n_log_step, n_row, C, H, W\n            diffusion_grid = rearrange(diffusion_row, 'n b c h w -> b n c h w')\n            diffusion_grid = rearrange(diffusion_grid, 'b n c h w -> (b n) c h w')\n            diffusion_grid = make_grid(diffusion_grid, nrow=diffusion_row.shape[0])\n            log[\"diffusion_row\"] = diffusion_grid\n\n        if sample:\n            # get denoise row\n            with ema_scope(\"Sampling\"):\n                samples, z_denoise_row = self.sample_log(cond=c, batch_size=N, ddim=use_ddim,\n                                                         ddim_steps=ddim_steps, eta=ddim_eta)\n                # samples, z_denoise_row = self.sample(cond=c, batch_size=N, return_intermediates=True)\n            x_samples = self.decode_first_stage(samples)\n            log[\"samples\"] = x_samples\n       
     if plot_denoise_rows:\n                denoise_grid = self._get_denoise_row_from_list(z_denoise_row)\n                log[\"denoise_row\"] = denoise_grid\n\n            if quantize_denoised and not isinstance(self.first_stage_model, AutoencoderKL) and not isinstance(\n                    self.first_stage_model, IdentityFirstStage):\n                # also display when quantizing x0 while sampling\n                with ema_scope(\"Plotting Quantized Denoised\"):\n                    samples, z_denoise_row = self.sample_log(cond=c, batch_size=N, ddim=use_ddim,\n                                                             ddim_steps=ddim_steps, eta=ddim_eta,\n                                                             quantize_denoised=True)\n                    # samples, z_denoise_row = self.sample(cond=c, batch_size=N, return_intermediates=True,\n                    #                                      quantize_denoised=True)\n                x_samples = self.decode_first_stage(samples.to(self.device))\n                log[\"samples_x0_quantized\"] = x_samples\n\n        if unconditional_guidance_scale > 1.0:\n            uc = self.get_unconditional_conditioning(N, unconditional_guidance_label)\n            if self.model.conditioning_key == \"crossattn-adm\":\n                uc = {\"c_crossattn\": [uc], \"c_adm\": c[\"c_adm\"]}\n            with ema_scope(\"Sampling with classifier-free guidance\"):\n                samples_cfg, _ = self.sample_log(cond=c, batch_size=N, ddim=use_ddim,\n                                                 ddim_steps=ddim_steps, eta=ddim_eta,\n                                                 unconditional_guidance_scale=unconditional_guidance_scale,\n                                                 unconditional_conditioning=uc,\n                                                 )\n                x_samples_cfg = self.decode_first_stage(samples_cfg)\n                log[f\"samples_cfg_scale_{unconditional_guidance_scale:.2f}\"] = x_samples_cfg\n\n        if inpaint:\n            # make a simple center square\n            b, h, w = z.shape[0], z.shape[2], z.shape[3]\n            mask = torch.ones(N, h, w).to(self.device)\n            # zeros will be filled in\n            mask[:, h // 4:3 * h // 4, w // 4:3 * w // 4] = 0.\n            mask = mask[:, None, ...]\n            with ema_scope(\"Plotting Inpaint\"):\n                samples, _ = self.sample_log(cond=c, batch_size=N, ddim=use_ddim, eta=ddim_eta,\n                                             ddim_steps=ddim_steps, x0=z[:N], mask=mask)\n            x_samples = self.decode_first_stage(samples.to(self.device))\n            log[\"samples_inpainting\"] = x_samples\n            log[\"mask\"] = mask\n\n            # outpaint\n            mask = 1. 
- mask\n            with ema_scope(\"Plotting Outpaint\"):\n                samples, _ = self.sample_log(cond=c, batch_size=N, ddim=use_ddim, eta=ddim_eta,\n                                             ddim_steps=ddim_steps, x0=z[:N], mask=mask)\n            x_samples = self.decode_first_stage(samples.to(self.device))\n            log[\"samples_outpainting\"] = x_samples\n\n        if plot_progressive_rows:\n            with ema_scope(\"Plotting Progressives\"):\n                img, progressives = self.progressive_denoising(c,\n                                                               shape=(self.channels, self.image_size, self.image_size),\n                                                               batch_size=N)\n            prog_row = self._get_denoise_row_from_list(progressives, desc=\"Progressive Generation\")\n            log[\"progressive_row\"] = prog_row\n\n        if return_keys:\n            if np.intersect1d(list(log.keys()), return_keys).shape[0] == 0:\n                return log\n            else:\n                return {key: log[key] for key in return_keys}\n        return log\n\n    def configure_optimizers(self):\n        lr = self.learning_rate\n        params = list(self.model.parameters())\n        if self.cond_stage_trainable:\n            print(f\"{self.__class__.__name__}: Also optimizing conditioner params!\")\n            params = params + list(self.cond_stage_model.parameters())\n        if self.learn_logvar:\n            print('Diffusion model optimizing logvar')\n            params.append(self.logvar)\n        opt = torch.optim.AdamW(params, lr=lr)\n        if self.use_scheduler:\n            assert 'target' in self.scheduler_config\n            scheduler = instantiate_from_config(self.scheduler_config)\n\n            print(\"Setting up LambdaLR scheduler...\")\n            scheduler = [\n                {\n                    'scheduler': LambdaLR(opt, lr_lambda=scheduler.schedule),\n                    'interval': 'step',\n                    'frequency': 1\n                }]\n            return [opt], scheduler\n        return opt\n\n    @torch.no_grad()\n    def to_rgb(self, x):\n        x = x.float()\n        if not hasattr(self, \"colorize\"):\n            self.colorize = torch.randn(3, x.shape[1], 1, 1).to(x)\n        x = nn.functional.conv2d(x, weight=self.colorize)\n        x = 2. 
* (x - x.min()) / (x.max() - x.min()) - 1.\n        return x\n\n\nclass DiffusionWrapper(pl.LightningModule):\n    def __init__(self, diff_model_config, conditioning_key):\n        super().__init__()\n        self.sequential_cross_attn = diff_model_config.pop(\"sequential_crossattn\", False)\n        self.diffusion_model = instantiate_from_config(diff_model_config)\n        self.conditioning_key = conditioning_key\n        assert self.conditioning_key in [None, 'concat', 'crossattn', 'hybrid', 'adm', 'hybrid-adm', 'crossattn-adm']\n\n    def forward(self, x, t, c_concat: list = None, c_crossattn: list = None, c_adm=None):\n        if self.conditioning_key is None:\n            out = self.diffusion_model(x, t)\n        elif self.conditioning_key == 'concat':\n            xc = torch.cat([x] + c_concat, dim=1)\n            out = self.diffusion_model(xc, t)\n        elif self.conditioning_key == 'crossattn':\n            if not self.sequential_cross_attn:\n                cc = torch.cat(c_crossattn, 1)\n            else:\n                cc = c_crossattn\n            out = self.diffusion_model(x, t, context=cc)\n        elif self.conditioning_key == 'hybrid':\n            xc = torch.cat([x] + c_concat, dim=1)\n            cc = torch.cat(c_crossattn, 1)\n            out = self.diffusion_model(xc, t, context=cc)\n        elif self.conditioning_key == 'hybrid-adm':\n            assert c_adm is not None\n            xc = torch.cat([x] + c_concat, dim=1)\n            cc = torch.cat(c_crossattn, 1)\n            out = self.diffusion_model(xc, t, context=cc, y=c_adm)\n        elif self.conditioning_key == 'crossattn-adm':\n            assert c_adm is not None\n            cc = torch.cat(c_crossattn, 1)\n            out = self.diffusion_model(x, t, context=cc, y=c_adm)\n        elif self.conditioning_key == 'adm':\n            cc = c_crossattn[0]\n            out = self.diffusion_model(x, t, y=cc)\n        else:\n            raise NotImplementedError()\n\n        return out\n\n\nclass LatentUpscaleDiffusion(LatentDiffusion):\n    def __init__(self, *args, low_scale_config, low_scale_key=\"LR\", noise_level_key=None, **kwargs):\n        super().__init__(*args, **kwargs)\n        # assumes that neither the cond_stage nor the low_scale_model contain trainable params\n        assert not self.cond_stage_trainable\n        self.instantiate_low_stage(low_scale_config)\n        self.low_scale_key = low_scale_key\n        self.noise_level_key = noise_level_key\n\n    def instantiate_low_stage(self, config):\n        model = instantiate_from_config(config)\n        self.low_scale_model = model.eval()\n        self.low_scale_model.train = disabled_train\n        for param in self.low_scale_model.parameters():\n            param.requires_grad = False\n\n    @torch.no_grad()\n    def get_input(self, batch, k, cond_key=None, bs=None, log_mode=False):\n        if not log_mode:\n            z, c = super().get_input(batch, k, force_c_encode=True, bs=bs)\n        else:\n            z, c, x, xrec, xc = super().get_input(batch, self.first_stage_key, return_first_stage_outputs=True,\n                                                  force_c_encode=True, return_original_cond=True, bs=bs)\n        x_low = batch[self.low_scale_key][:bs]\n        x_low = rearrange(x_low, 'b h w c -> b c h w')\n        x_low = x_low.to(memory_format=torch.contiguous_format).float()\n        zx, noise_level = self.low_scale_model(x_low)\n        if self.noise_level_key is not None:\n            # get noise level from batch instead, e.g. 
when extracting a custom noise level for bsr\n            raise NotImplementedError('TODO')\n\n        all_conds = {\"c_concat\": [zx], \"c_crossattn\": [c], \"c_adm\": noise_level}\n        if log_mode:\n            # TODO: maybe disable if too expensive\n            x_low_rec = self.low_scale_model.decode(zx)\n            return z, all_conds, x, xrec, xc, x_low, x_low_rec, noise_level\n        return z, all_conds\n\n    @torch.no_grad()\n    def log_images(self, batch, N=8, n_row=4, sample=True, ddim_steps=200, ddim_eta=1., return_keys=None,\n                   plot_denoise_rows=False, plot_progressive_rows=True, plot_diffusion_rows=True,\n                   unconditional_guidance_scale=1., unconditional_guidance_label=None, use_ema_scope=True,\n                   **kwargs):\n        ema_scope = self.ema_scope if use_ema_scope else nullcontext\n        use_ddim = ddim_steps is not None\n\n        log = dict()\n        z, c, x, xrec, xc, x_low, x_low_rec, noise_level = self.get_input(batch, self.first_stage_key, bs=N,\n                                                                          log_mode=True)\n        N = min(x.shape[0], N)\n        n_row = min(x.shape[0], n_row)\n        log[\"inputs\"] = x\n        log[\"reconstruction\"] = xrec\n        log[\"x_lr\"] = x_low\n        log[f\"x_lr_rec_@noise_levels{'-'.join(map(lambda x: str(x), list(noise_level.cpu().numpy())))}\"] = x_low_rec\n        if self.model.conditioning_key is not None:\n            if hasattr(self.cond_stage_model, \"decode\"):\n                xc = self.cond_stage_model.decode(c)\n                log[\"conditioning\"] = xc\n            elif self.cond_stage_key in [\"caption\", \"txt\"]:\n                xc = log_txt_as_img((x.shape[2], x.shape[3]), batch[self.cond_stage_key], size=x.shape[2] // 25)\n                log[\"conditioning\"] = xc\n            elif self.cond_stage_key in ['class_label', 'cls']:\n                xc = log_txt_as_img((x.shape[2], x.shape[3]), batch[\"human_label\"], size=x.shape[2] // 25)\n                log['conditioning'] = xc\n            elif isimage(xc):\n                log[\"conditioning\"] = xc\n            if ismap(xc):\n                log[\"original_conditioning\"] = self.to_rgb(xc)\n\n        if plot_diffusion_rows:\n            # get diffusion row\n            diffusion_row = list()\n            z_start = z[:n_row]\n            for t in range(self.num_timesteps):\n                if t % self.log_every_t == 0 or t == self.num_timesteps - 1:\n                    t = repeat(torch.tensor([t]), '1 -> b', b=n_row)\n                    t = t.to(self.device).long()\n                    noise = torch.randn_like(z_start)\n                    z_noisy = self.q_sample(x_start=z_start, t=t, noise=noise)\n                    diffusion_row.append(self.decode_first_stage(z_noisy))\n\n            diffusion_row = torch.stack(diffusion_row)  # n_log_step, n_row, C, H, W\n            diffusion_grid = rearrange(diffusion_row, 'n b c h w -> b n c h w')\n            diffusion_grid = rearrange(diffusion_grid, 'b n c h w -> (b n) c h w')\n            diffusion_grid = make_grid(diffusion_grid, nrow=diffusion_row.shape[0])\n            log[\"diffusion_row\"] = diffusion_grid\n\n        if sample:\n            # get denoise row\n            with ema_scope(\"Sampling\"):\n                samples, z_denoise_row = self.sample_log(cond=c, batch_size=N, ddim=use_ddim,\n                                                         ddim_steps=ddim_steps, eta=ddim_eta)\n                # samples, z_denoise_row = 
self.sample(cond=c, batch_size=N, return_intermediates=True)\n            x_samples = self.decode_first_stage(samples)\n            log[\"samples\"] = x_samples\n            if plot_denoise_rows:\n                denoise_grid = self._get_denoise_row_from_list(z_denoise_row)\n                log[\"denoise_row\"] = denoise_grid\n\n        if unconditional_guidance_scale > 1.0:\n            uc_tmp = self.get_unconditional_conditioning(N, unconditional_guidance_label)\n            # TODO explore better \"unconditional\" choices for the other keys\n            # maybe guide away from empty text label and highest noise level and maximally degraded zx?\n            uc = dict()\n            for k in c:\n                if k == \"c_crossattn\":\n                    assert isinstance(c[k], list) and len(c[k]) == 1\n                    uc[k] = [uc_tmp]\n                elif k == \"c_adm\":  # todo: only run with text-based guidance?\n                    assert isinstance(c[k], torch.Tensor)\n                    #uc[k] = torch.ones_like(c[k]) * self.low_scale_model.max_noise_level\n                    uc[k] = c[k]\n                elif isinstance(c[k], list):\n                    uc[k] = [c[k][i] for i in range(len(c[k]))]\n                else:\n                    uc[k] = c[k]\n\n            with ema_scope(\"Sampling with classifier-free guidance\"):\n                samples_cfg, _ = self.sample_log(cond=c, batch_size=N, ddim=use_ddim,\n                                                 ddim_steps=ddim_steps, eta=ddim_eta,\n                                                 unconditional_guidance_scale=unconditional_guidance_scale,\n                                                 unconditional_conditioning=uc,\n                                                 )\n                x_samples_cfg = self.decode_first_stage(samples_cfg)\n                log[f\"samples_cfg_scale_{unconditional_guidance_scale:.2f}\"] = x_samples_cfg\n\n        if plot_progressive_rows:\n            with ema_scope(\"Plotting Progressives\"):\n                img, progressives = self.progressive_denoising(c,\n                                                               shape=(self.channels, self.image_size, self.image_size),\n                                                               batch_size=N)\n            prog_row = self._get_denoise_row_from_list(progressives, desc=\"Progressive Generation\")\n            log[\"progressive_row\"] = prog_row\n\n        return log\n\n\nclass LatentFinetuneDiffusion(LatentDiffusion):\n    \"\"\"\n         Basis for different finetunes, such as inpainting or depth2image\n         To disable finetuning mode, set finetune_keys to None\n    \"\"\"\n\n    def __init__(self,\n                 concat_keys: tuple,\n                 finetune_keys=(\"model.diffusion_model.input_blocks.0.0.weight\",\n                                \"model_ema.diffusion_modelinput_blocks00weight\"\n                                ),\n                 keep_finetune_dims=4,\n                 # if model was trained without concat mode before and we would like to keep these channels\n                 c_concat_log_start=None,  # to log reconstruction of c_concat codes\n                 c_concat_log_end=None,\n                 *args, **kwargs\n                 ):\n        ckpt_path = kwargs.pop(\"ckpt_path\", None)\n        ignore_keys = kwargs.pop(\"ignore_keys\", list())\n        super().__init__(*args, **kwargs)\n        self.finetune_keys = finetune_keys\n        self.concat_keys = concat_keys\n        self.keep_dims 
= keep_finetune_dims\n        self.c_concat_log_start = c_concat_log_start\n        self.c_concat_log_end = c_concat_log_end\n        if exists(self.finetune_keys): assert exists(ckpt_path), 'can only finetune from a given checkpoint'\n        if exists(ckpt_path):\n            self.init_from_ckpt(ckpt_path, ignore_keys)\n\n    def init_from_ckpt(self, path, ignore_keys=list(), only_model=False):\n        sd = torch.load(path, map_location=\"cpu\")\n        if \"state_dict\" in list(sd.keys()):\n            sd = sd[\"state_dict\"]\n        keys = list(sd.keys())\n        for k in keys:\n            for ik in ignore_keys:\n                if k.startswith(ik):\n                    print(\"Deleting key {} from state_dict.\".format(k))\n                    del sd[k]\n\n            # make it explicit, finetune by including extra input channels\n            if exists(self.finetune_keys) and k in self.finetune_keys:\n                new_entry = None\n                for name, param in self.named_parameters():\n                    if name in self.finetune_keys:\n                        print(\n                            f\"modifying key '{name}' and keeping its original {self.keep_dims} (channels) dimensions only\")\n                        new_entry = torch.zeros_like(param)  # zero init\n                assert exists(new_entry), 'did not find matching parameter to modify'\n                new_entry[:, :self.keep_dims, ...] = sd[k]\n                sd[k] = new_entry\n\n        missing, unexpected = self.load_state_dict(sd, strict=False) if not only_model else self.model.load_state_dict(\n            sd, strict=False)\n        print(f\"Restored from {path} with {len(missing)} missing and {len(unexpected)} unexpected keys\")\n        if len(missing) > 0:\n            print(f\"Missing Keys: {missing}\")\n        if len(unexpected) > 0:\n            print(f\"Unexpected Keys: {unexpected}\")\n\n    @torch.no_grad()\n    def log_images(self, batch, N=8, n_row=4, sample=True, ddim_steps=200, ddim_eta=1., return_keys=None,\n                   quantize_denoised=True, inpaint=True, plot_denoise_rows=False, plot_progressive_rows=True,\n                   plot_diffusion_rows=True, unconditional_guidance_scale=1., unconditional_guidance_label=None,\n                   use_ema_scope=True,\n                   **kwargs):\n        ema_scope = self.ema_scope if use_ema_scope else nullcontext\n        use_ddim = ddim_steps is not None\n\n        log = dict()\n        z, c, x, xrec, xc = self.get_input(batch, self.first_stage_key, bs=N, return_first_stage_outputs=True)\n        c_cat, c = c[\"c_concat\"][0], c[\"c_crossattn\"][0]\n        N = min(x.shape[0], N)\n        n_row = min(x.shape[0], n_row)\n        log[\"inputs\"] = x\n        log[\"reconstruction\"] = xrec\n        if self.model.conditioning_key is not None:\n            if hasattr(self.cond_stage_model, \"decode\"):\n                xc = self.cond_stage_model.decode(c)\n                log[\"conditioning\"] = xc\n            elif self.cond_stage_key in [\"caption\", \"txt\"]:\n                xc = log_txt_as_img((x.shape[2], x.shape[3]), batch[self.cond_stage_key], size=x.shape[2] // 25)\n                log[\"conditioning\"] = xc\n            elif self.cond_stage_key in ['class_label', 'cls']:\n                xc = log_txt_as_img((x.shape[2], x.shape[3]), batch[\"human_label\"], size=x.shape[2] // 25)\n                log['conditioning'] = xc\n            elif isimage(xc):\n                log[\"conditioning\"] = xc\n            if ismap(xc):\n        
        log[\"original_conditioning\"] = self.to_rgb(xc)\n\n        if not (self.c_concat_log_start is None and self.c_concat_log_end is None):\n            log[\"c_concat_decoded\"] = self.decode_first_stage(c_cat[:, self.c_concat_log_start:self.c_concat_log_end])\n\n        if plot_diffusion_rows:\n            # get diffusion row\n            diffusion_row = list()\n            z_start = z[:n_row]\n            for t in range(self.num_timesteps):\n                if t % self.log_every_t == 0 or t == self.num_timesteps - 1:\n                    t = repeat(torch.tensor([t]), '1 -> b', b=n_row)\n                    t = t.to(self.device).long()\n                    noise = torch.randn_like(z_start)\n                    z_noisy = self.q_sample(x_start=z_start, t=t, noise=noise)\n                    diffusion_row.append(self.decode_first_stage(z_noisy))\n\n            diffusion_row = torch.stack(diffusion_row)  # n_log_step, n_row, C, H, W\n            diffusion_grid = rearrange(diffusion_row, 'n b c h w -> b n c h w')\n            diffusion_grid = rearrange(diffusion_grid, 'b n c h w -> (b n) c h w')\n            diffusion_grid = make_grid(diffusion_grid, nrow=diffusion_row.shape[0])\n            log[\"diffusion_row\"] = diffusion_grid\n\n        if sample:\n            # get denoise row\n            with ema_scope(\"Sampling\"):\n                samples, z_denoise_row = self.sample_log(cond={\"c_concat\": [c_cat], \"c_crossattn\": [c]},\n                                                         batch_size=N, ddim=use_ddim,\n                                                         ddim_steps=ddim_steps, eta=ddim_eta)\n                # samples, z_denoise_row = self.sample(cond=c, batch_size=N, return_intermediates=True)\n            x_samples = self.decode_first_stage(samples)\n            log[\"samples\"] = x_samples\n            if plot_denoise_rows:\n                denoise_grid = self._get_denoise_row_from_list(z_denoise_row)\n                log[\"denoise_row\"] = denoise_grid\n\n        if unconditional_guidance_scale > 1.0:\n            uc_cross = self.get_unconditional_conditioning(N, unconditional_guidance_label)\n            uc_cat = c_cat\n            uc_full = {\"c_concat\": [uc_cat], \"c_crossattn\": [uc_cross]}\n            with ema_scope(\"Sampling with classifier-free guidance\"):\n                samples_cfg, _ = self.sample_log(cond={\"c_concat\": [c_cat], \"c_crossattn\": [c]},\n                                                 batch_size=N, ddim=use_ddim,\n                                                 ddim_steps=ddim_steps, eta=ddim_eta,\n                                                 unconditional_guidance_scale=unconditional_guidance_scale,\n                                                 unconditional_conditioning=uc_full,\n                                                 )\n                x_samples_cfg = self.decode_first_stage(samples_cfg)\n                log[f\"samples_cfg_scale_{unconditional_guidance_scale:.2f}\"] = x_samples_cfg\n\n        return log\n\n\nclass LatentInpaintDiffusion(LatentFinetuneDiffusion):\n    \"\"\"\n    can either run as pure inpainting model (only concat mode) or with mixed conditionings,\n    e.g. 
mask as concat and text via cross-attn.\n    To disable finetuning mode, set finetune_keys to None\n     \"\"\"\n\n    def __init__(self,\n                 concat_keys=(\"mask\", \"masked_image\"),\n                 masked_image_key=\"masked_image\",\n                 *args, **kwargs\n                 ):\n        super().__init__(concat_keys, *args, **kwargs)\n        self.masked_image_key = masked_image_key\n        assert self.masked_image_key in concat_keys\n\n    @torch.no_grad()\n    def get_input(self, batch, k, cond_key=None, bs=None, return_first_stage_outputs=False):\n        # note: restricted to non-trainable encoders currently\n        assert not self.cond_stage_trainable, 'trainable cond stages not yet supported for inpainting'\n        z, c, x, xrec, xc = super().get_input(batch, self.first_stage_key, return_first_stage_outputs=True,\n                                              force_c_encode=True, return_original_cond=True, bs=bs)\n\n        assert exists(self.concat_keys)\n        c_cat = list()\n        for ck in self.concat_keys:\n            cc = rearrange(batch[ck], 'b h w c -> b c h w').to(memory_format=torch.contiguous_format).float()\n            if bs is not None:\n                cc = cc[:bs]\n                cc = cc.to(self.device)\n            bchw = z.shape\n            if ck != self.masked_image_key:\n                cc = torch.nn.functional.interpolate(cc, size=bchw[-2:])\n            else:\n                cc = self.get_first_stage_encoding(self.encode_first_stage(cc))\n            c_cat.append(cc)\n        c_cat = torch.cat(c_cat, dim=1)\n        all_conds = {\"c_concat\": [c_cat], \"c_crossattn\": [c]}\n        if return_first_stage_outputs:\n            return z, all_conds, x, xrec, xc\n        return z, all_conds\n\n    @torch.no_grad()\n    def log_images(self, *args, **kwargs):\n        log = super(LatentInpaintDiffusion, self).log_images(*args, **kwargs)\n        log[\"masked_image\"] = rearrange(args[0][\"masked_image\"],\n                                        'b h w c -> b c h w').to(memory_format=torch.contiguous_format).float()\n        return log\n\n\nclass LatentDepth2ImageDiffusion(LatentFinetuneDiffusion):\n    \"\"\"\n    condition on monocular depth estimation\n    \"\"\"\n\n    def __init__(self, depth_stage_config, concat_keys=(\"midas_in\",), *args, **kwargs):\n        super().__init__(concat_keys=concat_keys, *args, **kwargs)\n        self.depth_model = instantiate_from_config(depth_stage_config)\n        self.depth_stage_key = concat_keys[0]\n\n    @torch.no_grad()\n    def get_input(self, batch, k, cond_key=None, bs=None, return_first_stage_outputs=False):\n        # note: restricted to non-trainable encoders currently\n        assert not self.cond_stage_trainable, 'trainable cond stages not yet supported for depth2img'\n        z, c, x, xrec, xc = super().get_input(batch, self.first_stage_key, return_first_stage_outputs=True,\n                                              force_c_encode=True, return_original_cond=True, bs=bs)\n\n        assert exists(self.concat_keys)\n        assert len(self.concat_keys) == 1\n        c_cat = list()\n        for ck in self.concat_keys:\n            cc = batch[ck]\n            if bs is not None:\n                cc = cc[:bs]\n                cc = cc.to(self.device)\n            cc = self.depth_model(cc)\n            cc = torch.nn.functional.interpolate(\n                cc,\n                size=z.shape[2:],\n                mode=\"bicubic\",\n                align_corners=False,\n            )\n\n 
           depth_min, depth_max = torch.amin(cc, dim=[1, 2, 3], keepdim=True), torch.amax(cc, dim=[1, 2, 3],\n                                                                                           keepdim=True)\n            cc = 2. * (cc - depth_min) / (depth_max - depth_min + 0.001) - 1.\n            c_cat.append(cc)\n        c_cat = torch.cat(c_cat, dim=1)\n        all_conds = {\"c_concat\": [c_cat], \"c_crossattn\": [c]}\n        if return_first_stage_outputs:\n            return z, all_conds, x, xrec, xc\n        return z, all_conds\n\n    @torch.no_grad()\n    def log_images(self, *args, **kwargs):\n        log = super().log_images(*args, **kwargs)\n        depth = self.depth_model(args[0][self.depth_stage_key])\n        depth_min, depth_max = torch.amin(depth, dim=[1, 2, 3], keepdim=True), \\\n                               torch.amax(depth, dim=[1, 2, 3], keepdim=True)\n        log[\"depth\"] = 2. * (depth - depth_min) / (depth_max - depth_min) - 1.\n        return log\n\n\nclass LatentUpscaleFinetuneDiffusion(LatentFinetuneDiffusion):\n    \"\"\"\n        condition on low-res image (and optionally on some spatial noise augmentation)\n    \"\"\"\n    def __init__(self, concat_keys=(\"lr\",), reshuffle_patch_size=None,\n                 low_scale_config=None, low_scale_key=None, *args, **kwargs):\n        super().__init__(concat_keys=concat_keys, *args, **kwargs)\n        self.reshuffle_patch_size = reshuffle_patch_size\n        self.low_scale_model = None\n        if low_scale_config is not None:\n            print(\"Initializing a low-scale model\")\n            assert exists(low_scale_key)\n            self.instantiate_low_stage(low_scale_config)\n            self.low_scale_key = low_scale_key\n\n    def instantiate_low_stage(self, config):\n        model = instantiate_from_config(config)\n        self.low_scale_model = model.eval()\n        self.low_scale_model.train = disabled_train\n        for param in self.low_scale_model.parameters():\n            param.requires_grad = False\n\n    @torch.no_grad()\n    def get_input(self, batch, k, cond_key=None, bs=None, return_first_stage_outputs=False):\n        # note: restricted to non-trainable encoders currently\n        assert not self.cond_stage_trainable, 'trainable cond stages not yet supported for upscaling-ft'\n        z, c, x, xrec, xc = super().get_input(batch, self.first_stage_key, return_first_stage_outputs=True,\n                                              force_c_encode=True, return_original_cond=True, bs=bs)\n\n        assert exists(self.concat_keys)\n        assert len(self.concat_keys) == 1\n        # optionally make spatial noise_level here\n        c_cat = list()\n        noise_level = None\n        for ck in self.concat_keys:\n            cc = batch[ck]\n            cc = rearrange(cc, 'b h w c -> b c h w')\n            if exists(self.reshuffle_patch_size):\n                assert isinstance(self.reshuffle_patch_size, int)\n                cc = rearrange(cc, 'b c (p1 h) (p2 w) -> b (p1 p2 c) h w',\n                               p1=self.reshuffle_patch_size, p2=self.reshuffle_patch_size)\n            if bs is not None:\n                cc = cc[:bs]\n                cc = cc.to(self.device)\n            if exists(self.low_scale_model) and ck == self.low_scale_key:\n                cc, noise_level = self.low_scale_model(cc)\n            c_cat.append(cc)\n        c_cat = torch.cat(c_cat, dim=1)\n        if exists(noise_level):\n            all_conds = {\"c_concat\": [c_cat], \"c_crossattn\": [c], \"c_adm\": 
noise_level}\n        else:\n            all_conds = {\"c_concat\": [c_cat], \"c_crossattn\": [c]}\n        if return_first_stage_outputs:\n            return z, all_conds, x, xrec, xc\n        return z, all_conds\n\n    @torch.no_grad()\n    def log_images(self, *args, **kwargs):\n        log = super().log_images(*args, **kwargs)\n        log[\"lr\"] = rearrange(args[0][\"lr\"], 'b h w c -> b c h w')\n        return log\n"
  },
  {
    "path": "ToonCrafter/ldm/models/diffusion/dpm_solver/__init__.py",
    "content": "from .sampler import DPMSolverSampler"
  },
  {
    "path": "ToonCrafter/ldm/models/diffusion/dpm_solver/dpm_solver.py",
    "content": "import torch\nimport torch.nn.functional as F\nimport math\nfrom tqdm import tqdm\n\n\nclass NoiseScheduleVP:\n    def __init__(\n            self,\n            schedule='discrete',\n            betas=None,\n            alphas_cumprod=None,\n            continuous_beta_0=0.1,\n            continuous_beta_1=20.,\n    ):\n        \"\"\"Create a wrapper class for the forward SDE (VP type).\n        ***\n        Update: We support discrete-time diffusion models by implementing a picewise linear interpolation for log_alpha_t.\n                We recommend to use schedule='discrete' for the discrete-time diffusion models, especially for high-resolution images.\n        ***\n        The forward SDE ensures that the condition distribution q_{t|0}(x_t | x_0) = N ( alpha_t * x_0, sigma_t^2 * I ).\n        We further define lambda_t = log(alpha_t) - log(sigma_t), which is the half-logSNR (described in the DPM-Solver paper).\n        Therefore, we implement the functions for computing alpha_t, sigma_t and lambda_t. For t in [0, T], we have:\n            log_alpha_t = self.marginal_log_mean_coeff(t)\n            sigma_t = self.marginal_std(t)\n            lambda_t = self.marginal_lambda(t)\n        Moreover, as lambda(t) is an invertible function, we also support its inverse function:\n            t = self.inverse_lambda(lambda_t)\n        ===============================================================\n        We support both discrete-time DPMs (trained on n = 0, 1, ..., N-1) and continuous-time DPMs (trained on t in [t_0, T]).\n        1. For discrete-time DPMs:\n            For discrete-time DPMs trained on n = 0, 1, ..., N-1, we convert the discrete steps to continuous time steps by:\n                t_i = (i + 1) / N\n            e.g. for N = 1000, we have t_0 = 1e-3 and T = t_{N-1} = 1.\n            We solve the corresponding diffusion ODE from time T = 1 to time t_0 = 1e-3.\n            Args:\n                betas: A `torch.Tensor`. The beta array for the discrete-time DPM. (See the original DDPM paper for details)\n                alphas_cumprod: A `torch.Tensor`. The cumprod alphas for the discrete-time DPM. (See the original DDPM paper for details)\n            Note that we always have alphas_cumprod = cumprod(betas). Therefore, we only need to set one of `betas` and `alphas_cumprod`.\n            **Important**:  Please pay special attention for the args for `alphas_cumprod`:\n                The `alphas_cumprod` is the \\hat{alpha_n} arrays in the notations of DDPM. Specifically, DDPMs assume that\n                    q_{t_n | 0}(x_{t_n} | x_0) = N ( \\sqrt{\\hat{alpha_n}} * x_0, (1 - \\hat{alpha_n}) * I ).\n                Therefore, the notation \\hat{alpha_n} is different from the notation alpha_t in DPM-Solver. In fact, we have\n                    alpha_{t_n} = \\sqrt{\\hat{alpha_n}},\n                and\n                    log(alpha_{t_n}) = 0.5 * log(\\hat{alpha_n}).\n        2. For continuous-time DPMs:\n            We support two types of VPSDEs: linear (DDPM) and cosine (improved-DDPM). The hyperparameters for the noise\n            schedule are the default settings in DDPM and improved-DDPM:\n            Args:\n                beta_min: A `float` number. The smallest beta for the linear schedule.\n                beta_max: A `float` number. The largest beta for the linear schedule.\n                cosine_s: A `float` number. The hyperparameter in the cosine schedule.\n                cosine_beta_max: A `float` number. 
The hyperparameter in the cosine schedule.\n                T: A `float` number. The ending time of the forward process.\n        ===============================================================\n        Args:\n            schedule: A `str`. The noise schedule of the forward SDE. 'discrete' for discrete-time DPMs,\n                    'linear' or 'cosine' for continuous-time DPMs.\n        Returns:\n            A wrapper object of the forward SDE (VP type).\n\n        ===============================================================\n        Example:\n        # For discrete-time DPMs, given betas (the beta array for n = 0, 1, ..., N - 1):\n        >>> ns = NoiseScheduleVP('discrete', betas=betas)\n        # For discrete-time DPMs, given alphas_cumprod (the \\hat{alpha_n} array for n = 0, 1, ..., N - 1):\n        >>> ns = NoiseScheduleVP('discrete', alphas_cumprod=alphas_cumprod)\n        # For continuous-time DPMs (VPSDE), linear schedule:\n        >>> ns = NoiseScheduleVP('linear', continuous_beta_0=0.1, continuous_beta_1=20.)\n        \"\"\"\n\n        if schedule not in ['discrete', 'linear', 'cosine']:\n            raise ValueError(\n                \"Unsupported noise schedule {}. The schedule needs to be 'discrete' or 'linear' or 'cosine'\".format(\n                    schedule))\n\n        self.schedule = schedule\n        if schedule == 'discrete':\n            if betas is not None:\n                log_alphas = 0.5 * torch.log(1 - betas).cumsum(dim=0)\n            else:\n                assert alphas_cumprod is not None\n                log_alphas = 0.5 * torch.log(alphas_cumprod)\n            self.total_N = len(log_alphas)\n            self.T = 1.\n            self.t_array = torch.linspace(0., 1., self.total_N + 1)[1:].reshape((1, -1))\n            self.log_alpha_array = log_alphas.reshape((1, -1,))\n        else:\n            self.total_N = 1000\n            self.beta_0 = continuous_beta_0\n            self.beta_1 = continuous_beta_1\n            self.cosine_s = 0.008\n            self.cosine_beta_max = 999.\n            self.cosine_t_max = math.atan(self.cosine_beta_max * (1. + self.cosine_s) / math.pi) * 2. * (\n                        1. + self.cosine_s) / math.pi - self.cosine_s\n            self.cosine_log_alpha_0 = math.log(math.cos(self.cosine_s / (1. + self.cosine_s) * math.pi / 2.))\n            self.schedule = schedule\n            if schedule == 'cosine':\n                # For the cosine schedule, T = 1 will have numerical issues. So we manually set the ending time T.\n                # Note that T = 0.9946 may be not the optimal setting. However, we find it works well.\n                self.T = 0.9946\n            else:\n                self.T = 1.\n\n    def marginal_log_mean_coeff(self, t):\n        \"\"\"\n        Compute log(alpha_t) of a given continuous-time label t in [0, T].\n        \"\"\"\n        if self.schedule == 'discrete':\n            return interpolate_fn(t.reshape((-1, 1)), self.t_array.to(t.device),\n                                  self.log_alpha_array.to(t.device)).reshape((-1))\n        elif self.schedule == 'linear':\n            return -0.25 * t ** 2 * (self.beta_1 - self.beta_0) - 0.5 * t * self.beta_0\n        elif self.schedule == 'cosine':\n            log_alpha_fn = lambda s: torch.log(torch.cos((s + self.cosine_s) / (1. 
+ self.cosine_s) * math.pi / 2.))\n            log_alpha_t = log_alpha_fn(t) - self.cosine_log_alpha_0\n            return log_alpha_t\n\n    def marginal_alpha(self, t):\n        \"\"\"\n        Compute alpha_t of a given continuous-time label t in [0, T].\n        \"\"\"\n        return torch.exp(self.marginal_log_mean_coeff(t))\n\n    def marginal_std(self, t):\n        \"\"\"\n        Compute sigma_t of a given continuous-time label t in [0, T].\n        \"\"\"\n        return torch.sqrt(1. - torch.exp(2. * self.marginal_log_mean_coeff(t)))\n\n    def marginal_lambda(self, t):\n        \"\"\"\n        Compute lambda_t = log(alpha_t) - log(sigma_t) of a given continuous-time label t in [0, T].\n        \"\"\"\n        log_mean_coeff = self.marginal_log_mean_coeff(t)\n        log_std = 0.5 * torch.log(1. - torch.exp(2. * log_mean_coeff))\n        return log_mean_coeff - log_std\n\n    def inverse_lambda(self, lamb):\n        \"\"\"\n        Compute the continuous-time label t in [0, T] of a given half-logSNR lambda_t.\n        \"\"\"\n        if self.schedule == 'linear':\n            tmp = 2. * (self.beta_1 - self.beta_0) * torch.logaddexp(-2. * lamb, torch.zeros((1,)).to(lamb))\n            Delta = self.beta_0 ** 2 + tmp\n            return tmp / (torch.sqrt(Delta) + self.beta_0) / (self.beta_1 - self.beta_0)\n        elif self.schedule == 'discrete':\n            log_alpha = -0.5 * torch.logaddexp(torch.zeros((1,)).to(lamb.device), -2. * lamb)\n            t = interpolate_fn(log_alpha.reshape((-1, 1)), torch.flip(self.log_alpha_array.to(lamb.device), [1]),\n                               torch.flip(self.t_array.to(lamb.device), [1]))\n            return t.reshape((-1,))\n        else:\n            log_alpha = -0.5 * torch.logaddexp(-2. * lamb, torch.zeros((1,)).to(lamb))\n            t_fn = lambda log_alpha_t: torch.arccos(torch.exp(log_alpha_t + self.cosine_log_alpha_0)) * 2. * (\n                        1. + self.cosine_s) / math.pi - self.cosine_s\n            t = t_fn(log_alpha)\n            return t\n\n\ndef model_wrapper(\n        model,\n        noise_schedule,\n        model_type=\"noise\",\n        model_kwargs={},\n        guidance_type=\"uncond\",\n        condition=None,\n        unconditional_condition=None,\n        guidance_scale=1.,\n        classifier_fn=None,\n        classifier_kwargs={},\n):\n    \"\"\"Create a wrapper function for the noise prediction model.\n    DPM-Solver needs to solve the continuous-time diffusion ODEs. For DPMs trained on discrete-time labels, we need to\n    firstly wrap the model function to a noise prediction model that accepts the continuous time as the input.\n    We support four types of the diffusion model by setting `model_type`:\n        1. \"noise\": noise prediction model. (Trained by predicting noise).\n        2. \"x_start\": data prediction model. (Trained by predicting the data x_0 at time 0).\n        3. \"v\": velocity prediction model. (Trained by predicting the velocity).\n            The \"v\" prediction is derivation detailed in Appendix D of [1], and is used in Imagen-Video [2].\n            [1] Salimans, Tim, and Jonathan Ho. \"Progressive distillation for fast sampling of diffusion models.\"\n                arXiv preprint arXiv:2202.00512 (2022).\n            [2] Ho, Jonathan, et al. \"Imagen Video: High Definition Video Generation with Diffusion Models.\"\n                arXiv preprint arXiv:2210.02303 (2022).\n\n        4. \"score\": marginal score function. 
(Trained by denoising score matching).\n            Note that the score function and the noise prediction model follows a simple relationship:\n            ```\n                noise(x_t, t) = -sigma_t * score(x_t, t)\n            ```\n    We support three types of guided sampling by DPMs by setting `guidance_type`:\n        1. \"uncond\": unconditional sampling by DPMs.\n            The input `model` has the following format:\n            ``\n                model(x, t_input, **model_kwargs) -> noise | x_start | v | score\n            ``\n        2. \"classifier\": classifier guidance sampling [3] by DPMs and another classifier.\n            The input `model` has the following format:\n            ``\n                model(x, t_input, **model_kwargs) -> noise | x_start | v | score\n            ``\n            The input `classifier_fn` has the following format:\n            ``\n                classifier_fn(x, t_input, cond, **classifier_kwargs) -> logits(x, t_input, cond)\n            ``\n            [3] P. Dhariwal and A. Q. Nichol, \"Diffusion models beat GANs on image synthesis,\"\n                in Advances in Neural Information Processing Systems, vol. 34, 2021, pp. 8780-8794.\n        3. \"classifier-free\": classifier-free guidance sampling by conditional DPMs.\n            The input `model` has the following format:\n            ``\n                model(x, t_input, cond, **model_kwargs) -> noise | x_start | v | score\n            ``\n            And if cond == `unconditional_condition`, the model output is the unconditional DPM output.\n            [4] Ho, Jonathan, and Tim Salimans. \"Classifier-free diffusion guidance.\"\n                arXiv preprint arXiv:2207.12598 (2022).\n\n    The `t_input` is the time label of the model, which may be discrete-time labels (i.e. 0 to 999)\n    or continuous-time labels (i.e. epsilon to T).\n    We wrap the model function to accept only `x` and `t_continuous` as inputs, and outputs the predicted noise:\n    ``\n        def model_fn(x, t_continuous) -> noise:\n            t_input = get_model_input_time(t_continuous)\n            return noise_pred(model, x, t_input, **model_kwargs)\n    ``\n    where `t_continuous` is the continuous time labels (i.e. epsilon to T). And we use `model_fn` for DPM-Solver.\n    ===============================================================\n    Args:\n        model: A diffusion model with the corresponding format described above.\n        noise_schedule: A noise schedule object, such as NoiseScheduleVP.\n        model_type: A `str`. The parameterization type of the diffusion model.\n                    \"noise\" or \"x_start\" or \"v\" or \"score\".\n        model_kwargs: A `dict`. A dict for the other inputs of the model function.\n        guidance_type: A `str`. The type of the guidance for sampling.\n                    \"uncond\" or \"classifier\" or \"classifier-free\".\n        condition: A pytorch tensor. The condition for the guided sampling.\n                    Only used for \"classifier\" or \"classifier-free\" guidance type.\n        unconditional_condition: A pytorch tensor. The condition for the unconditional sampling.\n                    Only used for \"classifier-free\" guidance type.\n        guidance_scale: A `float`. The scale for the guided sampling.\n        classifier_fn: A classifier function. Only used for the classifier guidance.\n        classifier_kwargs: A `dict`. 
A dict for the other inputs of the classifier function.\n    Returns:\n        A noise prediction model that accepts the noised data and the continuous time as the inputs.\n    \"\"\"\n\n    def get_model_input_time(t_continuous):\n        \"\"\"\n        Convert the continuous-time `t_continuous` (in [epsilon, T]) to the model input time.\n        For discrete-time DPMs, we convert `t_continuous` in [1 / N, 1] to `t_input` in [0, 1000 * (N - 1) / N].\n        For continuous-time DPMs, we just use `t_continuous`.\n        \"\"\"\n        if noise_schedule.schedule == 'discrete':\n            return (t_continuous - 1. / noise_schedule.total_N) * 1000.\n        else:\n            return t_continuous\n\n    def noise_pred_fn(x, t_continuous, cond=None):\n        if t_continuous.reshape((-1,)).shape[0] == 1:\n            t_continuous = t_continuous.expand((x.shape[0]))\n        t_input = get_model_input_time(t_continuous)\n        if cond is None:\n            output = model(x, t_input, **model_kwargs)\n        else:\n            output = model(x, t_input, cond, **model_kwargs)\n        if model_type == \"noise\":\n            return output\n        elif model_type == \"x_start\":\n            alpha_t, sigma_t = noise_schedule.marginal_alpha(t_continuous), noise_schedule.marginal_std(t_continuous)\n            dims = x.dim()\n            return (x - expand_dims(alpha_t, dims) * output) / expand_dims(sigma_t, dims)\n        elif model_type == \"v\":\n            alpha_t, sigma_t = noise_schedule.marginal_alpha(t_continuous), noise_schedule.marginal_std(t_continuous)\n            dims = x.dim()\n            return expand_dims(alpha_t, dims) * output + expand_dims(sigma_t, dims) * x\n        elif model_type == \"score\":\n            sigma_t = noise_schedule.marginal_std(t_continuous)\n            dims = x.dim()\n            return -expand_dims(sigma_t, dims) * output\n\n    def cond_grad_fn(x, t_input):\n        \"\"\"\n        Compute the gradient of the classifier, i.e. nabla_{x} log p_t(cond | x_t).\n        \"\"\"\n        with torch.enable_grad():\n            x_in = x.detach().requires_grad_(True)\n            log_prob = classifier_fn(x_in, t_input, condition, **classifier_kwargs)\n            return torch.autograd.grad(log_prob.sum(), x_in)[0]\n\n    def model_fn(x, t_continuous):\n        \"\"\"\n        The noise prediction model function that is used for DPM-Solver.\n        \"\"\"\n        if t_continuous.reshape((-1,)).shape[0] == 1:\n            t_continuous = t_continuous.expand((x.shape[0]))\n        if guidance_type == \"uncond\":\n            return noise_pred_fn(x, t_continuous)\n        elif guidance_type == \"classifier\":\n            assert classifier_fn is not None\n            t_input = get_model_input_time(t_continuous)\n            cond_grad = cond_grad_fn(x, t_input)\n            sigma_t = noise_schedule.marginal_std(t_continuous)\n            noise = noise_pred_fn(x, t_continuous)\n            return noise - guidance_scale * expand_dims(sigma_t, dims=cond_grad.dim()) * cond_grad\n        elif guidance_type == \"classifier-free\":\n            if guidance_scale == 1. 
or unconditional_condition is None:\n                return noise_pred_fn(x, t_continuous, cond=condition)\n            else:\n                x_in = torch.cat([x] * 2)\n                t_in = torch.cat([t_continuous] * 2)\n                c_in = torch.cat([unconditional_condition, condition])\n                noise_uncond, noise = noise_pred_fn(x_in, t_in, cond=c_in).chunk(2)\n                return noise_uncond + guidance_scale * (noise - noise_uncond)\n\n    assert model_type in [\"noise\", \"x_start\", \"v\"]\n    assert guidance_type in [\"uncond\", \"classifier\", \"classifier-free\"]\n    return model_fn\n\n\nclass DPM_Solver:\n    def __init__(self, model_fn, noise_schedule, predict_x0=False, thresholding=False, max_val=1.):\n        \"\"\"Construct a DPM-Solver.\n        We support both the noise prediction model (\"predicting epsilon\") and the data prediction model (\"predicting x0\").\n        If `predict_x0` is False, we use the solver for the noise prediction model (DPM-Solver).\n        If `predict_x0` is True, we use the solver for the data prediction model (DPM-Solver++).\n            In such case, we further support the \"dynamic thresholding\" in [1] when `thresholding` is True.\n            The \"dynamic thresholding\" can greatly improve the sample quality for pixel-space DPMs with large guidance scales.\n        Args:\n            model_fn: A noise prediction model function which accepts the continuous-time input (t in [epsilon, T]):\n                ``\n                def model_fn(x, t_continuous):\n                    return noise\n                ``\n            noise_schedule: A noise schedule object, such as NoiseScheduleVP.\n            predict_x0: A `bool`. If true, use the data prediction model; else, use the noise prediction model.\n            thresholding: A `bool`. Valid when `predict_x0` is True. Whether to use the \"dynamic thresholding\" in [1].\n            max_val: A `float`. Valid when both `predict_x0` and `thresholding` are True. The max value for thresholding.\n\n        [1] Chitwan Saharia, William Chan, Saurabh Saxena, Lala Li, Jay Whang, Emily Denton, Seyed Kamyar Seyed Ghasemipour, Burcu Karagol Ayan, S Sara Mahdavi, Rapha Gontijo Lopes, et al. Photorealistic text-to-image diffusion models with deep language understanding. 
arXiv preprint arXiv:2205.11487, 2022b.\n        \"\"\"\n        self.model = model_fn\n        self.noise_schedule = noise_schedule\n        self.predict_x0 = predict_x0\n        self.thresholding = thresholding\n        self.max_val = max_val\n\n    def noise_prediction_fn(self, x, t):\n        \"\"\"\n        Return the noise prediction model.\n        \"\"\"\n        return self.model(x, t)\n\n    def data_prediction_fn(self, x, t):\n        \"\"\"\n        Return the data prediction model (with thresholding).\n        \"\"\"\n        noise = self.noise_prediction_fn(x, t)\n        dims = x.dim()\n        alpha_t, sigma_t = self.noise_schedule.marginal_alpha(t), self.noise_schedule.marginal_std(t)\n        x0 = (x - expand_dims(sigma_t, dims) * noise) / expand_dims(alpha_t, dims)\n        if self.thresholding:\n            p = 0.995  # A hyperparameter in the paper of \"Imagen\" [1].\n            s = torch.quantile(torch.abs(x0).reshape((x0.shape[0], -1)), p, dim=1)\n            s = expand_dims(torch.maximum(s, self.max_val * torch.ones_like(s).to(s.device)), dims)\n            x0 = torch.clamp(x0, -s, s) / s\n        return x0\n\n    def model_fn(self, x, t):\n        \"\"\"\n        Convert the model to the noise prediction model or the data prediction model.\n        \"\"\"\n        if self.predict_x0:\n            return self.data_prediction_fn(x, t)\n        else:\n            return self.noise_prediction_fn(x, t)\n\n    def get_time_steps(self, skip_type, t_T, t_0, N, device):\n        \"\"\"Compute the intermediate time steps for sampling.\n        Args:\n            skip_type: A `str`. The type for the spacing of the time steps. We support three types:\n                - 'logSNR': uniform logSNR for the time steps.\n                - 'time_uniform': uniform time for the time steps. (**Recommended for high-resolutional data**.)\n                - 'time_quadratic': quadratic time for the time steps. (Used in DDIM for low-resolutional data.)\n            t_T: A `float`. The starting time of the sampling (default is T).\n            t_0: A `float`. The ending time of the sampling (default is epsilon).\n            N: A `int`. The total number of the spacing of the time steps.\n            device: A torch device.\n        Returns:\n            A pytorch tensor of the time steps, with the shape (N + 1,).\n        \"\"\"\n        if skip_type == 'logSNR':\n            lambda_T = self.noise_schedule.marginal_lambda(torch.tensor(t_T).to(device))\n            lambda_0 = self.noise_schedule.marginal_lambda(torch.tensor(t_0).to(device))\n            logSNR_steps = torch.linspace(lambda_T.cpu().item(), lambda_0.cpu().item(), N + 1).to(device)\n            return self.noise_schedule.inverse_lambda(logSNR_steps)\n        elif skip_type == 'time_uniform':\n            return torch.linspace(t_T, t_0, N + 1).to(device)\n        elif skip_type == 'time_quadratic':\n            t_order = 2\n            t = torch.linspace(t_T ** (1. / t_order), t_0 ** (1. 
/ t_order), N + 1).pow(t_order).to(device)\n            return t\n        else:\n            raise ValueError(\n                \"Unsupported skip_type {}, need to be 'logSNR' or 'time_uniform' or 'time_quadratic'\".format(skip_type))\n\n    def get_orders_and_timesteps_for_singlestep_solver(self, steps, order, skip_type, t_T, t_0, device):\n        \"\"\"\n        Get the order of each step for sampling by the singlestep DPM-Solver.\n        We combine both DPM-Solver-1,2,3 to use all the function evaluations, which is named as \"DPM-Solver-fast\".\n        Given a fixed number of function evaluations by `steps`, the sampling procedure by DPM-Solver-fast is:\n            - If order == 1:\n                We take `steps` of DPM-Solver-1 (i.e. DDIM).\n            - If order == 2:\n                - Denote K = (steps // 2). We take K or (K + 1) intermediate time steps for sampling.\n                - If steps % 2 == 0, we use K steps of DPM-Solver-2.\n                - If steps % 2 == 1, we use K steps of DPM-Solver-2 and 1 step of DPM-Solver-1.\n            - If order == 3:\n                - Denote K = (steps // 3 + 1). We take K intermediate time steps for sampling.\n                - If steps % 3 == 0, we use (K - 2) steps of DPM-Solver-3, and 1 step of DPM-Solver-2 and 1 step of DPM-Solver-1.\n                - If steps % 3 == 1, we use (K - 1) steps of DPM-Solver-3 and 1 step of DPM-Solver-1.\n                - If steps % 3 == 2, we use (K - 1) steps of DPM-Solver-3 and 1 step of DPM-Solver-2.\n        ============================================\n        Args:\n            order: A `int`. The max order for the solver (2 or 3).\n            steps: A `int`. The total number of function evaluations (NFE).\n            skip_type: A `str`. The type for the spacing of the time steps. We support three types:\n                - 'logSNR': uniform logSNR for the time steps.\n                - 'time_uniform': uniform time for the time steps. (**Recommended for high-resolutional data**.)\n                - 'time_quadratic': quadratic time for the time steps. (Used in DDIM for low-resolutional data.)\n            t_T: A `float`. The starting time of the sampling (default is T).\n            t_0: A `float`. 
The ending time of the sampling (default is epsilon).\n            device: A torch device.\n        Returns:\n            orders: A list of the solver order of each step.\n        \"\"\"\n        if order == 3:\n            K = steps // 3 + 1\n            if steps % 3 == 0:\n                orders = [3, ] * (K - 2) + [2, 1]\n            elif steps % 3 == 1:\n                orders = [3, ] * (K - 1) + [1]\n            else:\n                orders = [3, ] * (K - 1) + [2]\n        elif order == 2:\n            if steps % 2 == 0:\n                K = steps // 2\n                orders = [2, ] * K\n            else:\n                K = steps // 2 + 1\n                orders = [2, ] * (K - 1) + [1]\n        elif order == 1:\n            K = 1\n            orders = [1, ] * steps\n        else:\n            raise ValueError(\"'order' must be '1' or '2' or '3'.\")\n        if skip_type == 'logSNR':\n            # To reproduce the results in DPM-Solver paper\n            timesteps_outer = self.get_time_steps(skip_type, t_T, t_0, K, device)\n        else:\n            timesteps_outer = self.get_time_steps(skip_type, t_T, t_0, steps, device)[\n                torch.cumsum(torch.tensor([0, ] + orders), dim=0).to(device)]\n        return timesteps_outer, orders\n\n    def denoise_to_zero_fn(self, x, s):\n        \"\"\"\n        Denoise at the final step, which is equivalent to solving the ODE from lambda_s to infty by first-order discretization.\n        \"\"\"\n        return self.data_prediction_fn(x, s)\n\n    def dpm_solver_first_update(self, x, s, t, model_s=None, return_intermediate=False):\n        \"\"\"\n        DPM-Solver-1 (equivalent to DDIM) from time `s` to time `t`.\n        Args:\n            x: A pytorch tensor. The initial value at time `s`.\n            s: A pytorch tensor. The starting time, with the shape (x.shape[0],).\n            t: A pytorch tensor. The ending time, with the shape (x.shape[0],).\n            model_s: A pytorch tensor. The model function evaluated at time `s`.\n                If `model_s` is None, we evaluate the model by `x` and `s`; otherwise we directly use it.\n            return_intermediate: A `bool`. If true, also return the model value at time `s`.\n        Returns:\n            x_t: A pytorch tensor. 
The approximated solution at time `t`.\n        \"\"\"\n        ns = self.noise_schedule\n        dims = x.dim()\n        lambda_s, lambda_t = ns.marginal_lambda(s), ns.marginal_lambda(t)\n        h = lambda_t - lambda_s\n        log_alpha_s, log_alpha_t = ns.marginal_log_mean_coeff(s), ns.marginal_log_mean_coeff(t)\n        sigma_s, sigma_t = ns.marginal_std(s), ns.marginal_std(t)\n        alpha_t = torch.exp(log_alpha_t)\n\n        if self.predict_x0:\n            phi_1 = torch.expm1(-h)\n            if model_s is None:\n                model_s = self.model_fn(x, s)\n            x_t = (\n                    expand_dims(sigma_t / sigma_s, dims) * x\n                    - expand_dims(alpha_t * phi_1, dims) * model_s\n            )\n            if return_intermediate:\n                return x_t, {'model_s': model_s}\n            else:\n                return x_t\n        else:\n            phi_1 = torch.expm1(h)\n            if model_s is None:\n                model_s = self.model_fn(x, s)\n            x_t = (\n                    expand_dims(torch.exp(log_alpha_t - log_alpha_s), dims) * x\n                    - expand_dims(sigma_t * phi_1, dims) * model_s\n            )\n            if return_intermediate:\n                return x_t, {'model_s': model_s}\n            else:\n                return x_t\n\n    def singlestep_dpm_solver_second_update(self, x, s, t, r1=0.5, model_s=None, return_intermediate=False,\n                                            solver_type='dpm_solver'):\n        \"\"\"\n        Singlestep solver DPM-Solver-2 from time `s` to time `t`.\n        Args:\n            x: A pytorch tensor. The initial value at time `s`.\n            s: A pytorch tensor. The starting time, with the shape (x.shape[0],).\n            t: A pytorch tensor. The ending time, with the shape (x.shape[0],).\n            r1: A `float`. The hyperparameter of the second-order solver.\n            model_s: A pytorch tensor. The model function evaluated at time `s`.\n                If `model_s` is None, we evaluate the model by `x` and `s`; otherwise we directly use it.\n            return_intermediate: A `bool`. If true, also return the model value at time `s` and `s1` (the intermediate time).\n            solver_type: either 'dpm_solver' or 'taylor'. The type for the high-order solvers.\n                The type slightly impacts the performance. We recommend to use 'dpm_solver' type.\n        Returns:\n            x_t: A pytorch tensor. 
The approximated solution at time `t`.\n        \"\"\"\n        if solver_type not in ['dpm_solver', 'taylor']:\n            raise ValueError(\"'solver_type' must be either 'dpm_solver' or 'taylor', got {}\".format(solver_type))\n        if r1 is None:\n            r1 = 0.5\n        ns = self.noise_schedule\n        dims = x.dim()\n        lambda_s, lambda_t = ns.marginal_lambda(s), ns.marginal_lambda(t)\n        h = lambda_t - lambda_s\n        lambda_s1 = lambda_s + r1 * h\n        s1 = ns.inverse_lambda(lambda_s1)\n        log_alpha_s, log_alpha_s1, log_alpha_t = ns.marginal_log_mean_coeff(s), ns.marginal_log_mean_coeff(\n            s1), ns.marginal_log_mean_coeff(t)\n        sigma_s, sigma_s1, sigma_t = ns.marginal_std(s), ns.marginal_std(s1), ns.marginal_std(t)\n        alpha_s1, alpha_t = torch.exp(log_alpha_s1), torch.exp(log_alpha_t)\n\n        if self.predict_x0:\n            phi_11 = torch.expm1(-r1 * h)\n            phi_1 = torch.expm1(-h)\n\n            if model_s is None:\n                model_s = self.model_fn(x, s)\n            x_s1 = (\n                    expand_dims(sigma_s1 / sigma_s, dims) * x\n                    - expand_dims(alpha_s1 * phi_11, dims) * model_s\n            )\n            model_s1 = self.model_fn(x_s1, s1)\n            if solver_type == 'dpm_solver':\n                x_t = (\n                        expand_dims(sigma_t / sigma_s, dims) * x\n                        - expand_dims(alpha_t * phi_1, dims) * model_s\n                        - (0.5 / r1) * expand_dims(alpha_t * phi_1, dims) * (model_s1 - model_s)\n                )\n            elif solver_type == 'taylor':\n                x_t = (\n                        expand_dims(sigma_t / sigma_s, dims) * x\n                        - expand_dims(alpha_t * phi_1, dims) * model_s\n                        + (1. / r1) * expand_dims(alpha_t * ((torch.exp(-h) - 1.) / h + 1.), dims) * (\n                                    model_s1 - model_s)\n                )\n        else:\n            phi_11 = torch.expm1(r1 * h)\n            phi_1 = torch.expm1(h)\n\n            if model_s is None:\n                model_s = self.model_fn(x, s)\n            x_s1 = (\n                    expand_dims(torch.exp(log_alpha_s1 - log_alpha_s), dims) * x\n                    - expand_dims(sigma_s1 * phi_11, dims) * model_s\n            )\n            model_s1 = self.model_fn(x_s1, s1)\n            if solver_type == 'dpm_solver':\n                x_t = (\n                        expand_dims(torch.exp(log_alpha_t - log_alpha_s), dims) * x\n                        - expand_dims(sigma_t * phi_1, dims) * model_s\n                        - (0.5 / r1) * expand_dims(sigma_t * phi_1, dims) * (model_s1 - model_s)\n                )\n            elif solver_type == 'taylor':\n                x_t = (\n                        expand_dims(torch.exp(log_alpha_t - log_alpha_s), dims) * x\n                        - expand_dims(sigma_t * phi_1, dims) * model_s\n                        - (1. / r1) * expand_dims(sigma_t * ((torch.exp(h) - 1.) / h - 1.), dims) * (model_s1 - model_s)\n                )\n        if return_intermediate:\n            return x_t, {'model_s': model_s, 'model_s1': model_s1}\n        else:\n            return x_t\n\n    def singlestep_dpm_solver_third_update(self, x, s, t, r1=1. / 3., r2=2. 
/ 3., model_s=None, model_s1=None,\n                                           return_intermediate=False, solver_type='dpm_solver'):\n        \"\"\"\n        Singlestep solver DPM-Solver-3 from time `s` to time `t`.\n        Args:\n            x: A pytorch tensor. The initial value at time `s`.\n            s: A pytorch tensor. The starting time, with the shape (x.shape[0],).\n            t: A pytorch tensor. The ending time, with the shape (x.shape[0],).\n            r1: A `float`. The hyperparameter of the third-order solver.\n            r2: A `float`. The hyperparameter of the third-order solver.\n            model_s: A pytorch tensor. The model function evaluated at time `s`.\n                If `model_s` is None, we evaluate the model by `x` and `s`; otherwise we directly use it.\n            model_s1: A pytorch tensor. The model function evaluated at time `s1` (the intermediate time given by `r1`).\n                If `model_s1` is None, we evaluate the model at `s1`; otherwise we directly use it.\n            return_intermediate: A `bool`. If true, also return the model value at time `s`, `s1` and `s2` (the intermediate times).\n            solver_type: either 'dpm_solver' or 'taylor'. The type for the high-order solvers.\n                The type slightly impacts the performance. We recommend to use 'dpm_solver' type.\n        Returns:\n            x_t: A pytorch tensor. The approximated solution at time `t`.\n        \"\"\"\n        if solver_type not in ['dpm_solver', 'taylor']:\n            raise ValueError(\"'solver_type' must be either 'dpm_solver' or 'taylor', got {}\".format(solver_type))\n        if r1 is None:\n            r1 = 1. / 3.\n        if r2 is None:\n            r2 = 2. / 3.\n        ns = self.noise_schedule\n        dims = x.dim()\n        lambda_s, lambda_t = ns.marginal_lambda(s), ns.marginal_lambda(t)\n        h = lambda_t - lambda_s\n        lambda_s1 = lambda_s + r1 * h\n        lambda_s2 = lambda_s + r2 * h\n        s1 = ns.inverse_lambda(lambda_s1)\n        s2 = ns.inverse_lambda(lambda_s2)\n        log_alpha_s, log_alpha_s1, log_alpha_s2, log_alpha_t = ns.marginal_log_mean_coeff(\n            s), ns.marginal_log_mean_coeff(s1), ns.marginal_log_mean_coeff(s2), ns.marginal_log_mean_coeff(t)\n        sigma_s, sigma_s1, sigma_s2, sigma_t = ns.marginal_std(s), ns.marginal_std(s1), ns.marginal_std(\n            s2), ns.marginal_std(t)\n        alpha_s1, alpha_s2, alpha_t = torch.exp(log_alpha_s1), torch.exp(log_alpha_s2), torch.exp(log_alpha_t)\n\n        if self.predict_x0:\n            phi_11 = torch.expm1(-r1 * h)\n            phi_12 = torch.expm1(-r2 * h)\n            phi_1 = torch.expm1(-h)\n            phi_22 = torch.expm1(-r2 * h) / (r2 * h) + 1.\n            phi_2 = phi_1 / h + 1.\n            phi_3 = phi_2 / h - 0.5\n\n            if model_s is None:\n                model_s = self.model_fn(x, s)\n            if model_s1 is None:\n                x_s1 = (\n                        expand_dims(sigma_s1 / sigma_s, dims) * x\n                        - expand_dims(alpha_s1 * phi_11, dims) * model_s\n                )\n                model_s1 = self.model_fn(x_s1, s1)\n            x_s2 = (\n                    expand_dims(sigma_s2 / sigma_s, dims) * x\n                    - expand_dims(alpha_s2 * phi_12, dims) * model_s\n                    + r2 / r1 * expand_dims(alpha_s2 * phi_22, dims) * (model_s1 - model_s)\n            )\n            model_s2 = self.model_fn(x_s2, s2)\n            if solver_type == 'dpm_solver':\n                x_t = (\n             
           expand_dims(sigma_t / sigma_s, dims) * x\n                        - expand_dims(alpha_t * phi_1, dims) * model_s\n                        + (1. / r2) * expand_dims(alpha_t * phi_2, dims) * (model_s2 - model_s)\n                )\n            elif solver_type == 'taylor':\n                D1_0 = (1. / r1) * (model_s1 - model_s)\n                D1_1 = (1. / r2) * (model_s2 - model_s)\n                D1 = (r2 * D1_0 - r1 * D1_1) / (r2 - r1)\n                D2 = 2. * (D1_1 - D1_0) / (r2 - r1)\n                x_t = (\n                        expand_dims(sigma_t / sigma_s, dims) * x\n                        - expand_dims(alpha_t * phi_1, dims) * model_s\n                        + expand_dims(alpha_t * phi_2, dims) * D1\n                        - expand_dims(alpha_t * phi_3, dims) * D2\n                )\n        else:\n            phi_11 = torch.expm1(r1 * h)\n            phi_12 = torch.expm1(r2 * h)\n            phi_1 = torch.expm1(h)\n            phi_22 = torch.expm1(r2 * h) / (r2 * h) - 1.\n            phi_2 = phi_1 / h - 1.\n            phi_3 = phi_2 / h - 0.5\n\n            if model_s is None:\n                model_s = self.model_fn(x, s)\n            if model_s1 is None:\n                x_s1 = (\n                        expand_dims(torch.exp(log_alpha_s1 - log_alpha_s), dims) * x\n                        - expand_dims(sigma_s1 * phi_11, dims) * model_s\n                )\n                model_s1 = self.model_fn(x_s1, s1)\n            x_s2 = (\n                    expand_dims(torch.exp(log_alpha_s2 - log_alpha_s), dims) * x\n                    - expand_dims(sigma_s2 * phi_12, dims) * model_s\n                    - r2 / r1 * expand_dims(sigma_s2 * phi_22, dims) * (model_s1 - model_s)\n            )\n            model_s2 = self.model_fn(x_s2, s2)\n            if solver_type == 'dpm_solver':\n                x_t = (\n                        expand_dims(torch.exp(log_alpha_t - log_alpha_s), dims) * x\n                        - expand_dims(sigma_t * phi_1, dims) * model_s\n                        - (1. / r2) * expand_dims(sigma_t * phi_2, dims) * (model_s2 - model_s)\n                )\n            elif solver_type == 'taylor':\n                D1_0 = (1. / r1) * (model_s1 - model_s)\n                D1_1 = (1. / r2) * (model_s2 - model_s)\n                D1 = (r2 * D1_0 - r1 * D1_1) / (r2 - r1)\n                D2 = 2. * (D1_1 - D1_0) / (r2 - r1)\n                x_t = (\n                        expand_dims(torch.exp(log_alpha_t - log_alpha_s), dims) * x\n                        - expand_dims(sigma_t * phi_1, dims) * model_s\n                        - expand_dims(sigma_t * phi_2, dims) * D1\n                        - expand_dims(sigma_t * phi_3, dims) * D2\n                )\n\n        if return_intermediate:\n            return x_t, {'model_s': model_s, 'model_s1': model_s1, 'model_s2': model_s2}\n        else:\n            return x_t\n\n    def multistep_dpm_solver_second_update(self, x, model_prev_list, t_prev_list, t, solver_type=\"dpm_solver\"):\n        \"\"\"\n        Multistep solver DPM-Solver-2 from time `t_prev_list[-1]` to time `t`.\n        Args:\n            x: A pytorch tensor. The initial value at time `s`.\n            model_prev_list: A list of pytorch tensor. The previous computed model values.\n            t_prev_list: A list of pytorch tensor. The previous times, each time has the shape (x.shape[0],)\n            t: A pytorch tensor. The ending time, with the shape (x.shape[0],).\n            solver_type: either 'dpm_solver' or 'taylor'. 
The type for the high-order solvers.\n                The type slightly impacts the performance. We recommend to use 'dpm_solver' type.\n        Returns:\n            x_t: A pytorch tensor. The approximated solution at time `t`.\n        \"\"\"\n        if solver_type not in ['dpm_solver', 'taylor']:\n            raise ValueError(\"'solver_type' must be either 'dpm_solver' or 'taylor', got {}\".format(solver_type))\n        ns = self.noise_schedule\n        dims = x.dim()\n        model_prev_1, model_prev_0 = model_prev_list\n        t_prev_1, t_prev_0 = t_prev_list\n        lambda_prev_1, lambda_prev_0, lambda_t = ns.marginal_lambda(t_prev_1), ns.marginal_lambda(\n            t_prev_0), ns.marginal_lambda(t)\n        log_alpha_prev_0, log_alpha_t = ns.marginal_log_mean_coeff(t_prev_0), ns.marginal_log_mean_coeff(t)\n        sigma_prev_0, sigma_t = ns.marginal_std(t_prev_0), ns.marginal_std(t)\n        alpha_t = torch.exp(log_alpha_t)\n\n        h_0 = lambda_prev_0 - lambda_prev_1\n        h = lambda_t - lambda_prev_0\n        r0 = h_0 / h\n        D1_0 = expand_dims(1. / r0, dims) * (model_prev_0 - model_prev_1)\n        if self.predict_x0:\n            if solver_type == 'dpm_solver':\n                x_t = (\n                        expand_dims(sigma_t / sigma_prev_0, dims) * x\n                        - expand_dims(alpha_t * (torch.exp(-h) - 1.), dims) * model_prev_0\n                        - 0.5 * expand_dims(alpha_t * (torch.exp(-h) - 1.), dims) * D1_0\n                )\n            elif solver_type == 'taylor':\n                x_t = (\n                        expand_dims(sigma_t / sigma_prev_0, dims) * x\n                        - expand_dims(alpha_t * (torch.exp(-h) - 1.), dims) * model_prev_0\n                        + expand_dims(alpha_t * ((torch.exp(-h) - 1.) / h + 1.), dims) * D1_0\n                )\n        else:\n            if solver_type == 'dpm_solver':\n                x_t = (\n                        expand_dims(torch.exp(log_alpha_t - log_alpha_prev_0), dims) * x\n                        - expand_dims(sigma_t * (torch.exp(h) - 1.), dims) * model_prev_0\n                        - 0.5 * expand_dims(sigma_t * (torch.exp(h) - 1.), dims) * D1_0\n                )\n            elif solver_type == 'taylor':\n                x_t = (\n                        expand_dims(torch.exp(log_alpha_t - log_alpha_prev_0), dims) * x\n                        - expand_dims(sigma_t * (torch.exp(h) - 1.), dims) * model_prev_0\n                        - expand_dims(sigma_t * ((torch.exp(h) - 1.) / h - 1.), dims) * D1_0\n                )\n        return x_t\n\n    def multistep_dpm_solver_third_update(self, x, model_prev_list, t_prev_list, t, solver_type='dpm_solver'):\n        \"\"\"\n        Multistep solver DPM-Solver-3 from time `t_prev_list[-1]` to time `t`.\n        Args:\n            x: A pytorch tensor. The initial value at time `s`.\n            model_prev_list: A list of pytorch tensor. The previous computed model values.\n            t_prev_list: A list of pytorch tensor. The previous times, each time has the shape (x.shape[0],)\n            t: A pytorch tensor. The ending time, with the shape (x.shape[0],).\n            solver_type: either 'dpm_solver' or 'taylor'. The type for the high-order solvers.\n                The type slightly impacts the performance. We recommend to use 'dpm_solver' type.\n        Returns:\n            x_t: A pytorch tensor. 
The approximated solution at time `t`.\n        \"\"\"\n        ns = self.noise_schedule\n        dims = x.dim()\n        model_prev_2, model_prev_1, model_prev_0 = model_prev_list\n        t_prev_2, t_prev_1, t_prev_0 = t_prev_list\n        lambda_prev_2, lambda_prev_1, lambda_prev_0, lambda_t = ns.marginal_lambda(t_prev_2), ns.marginal_lambda(\n            t_prev_1), ns.marginal_lambda(t_prev_0), ns.marginal_lambda(t)\n        log_alpha_prev_0, log_alpha_t = ns.marginal_log_mean_coeff(t_prev_0), ns.marginal_log_mean_coeff(t)\n        sigma_prev_0, sigma_t = ns.marginal_std(t_prev_0), ns.marginal_std(t)\n        alpha_t = torch.exp(log_alpha_t)\n\n        h_1 = lambda_prev_1 - lambda_prev_2\n        h_0 = lambda_prev_0 - lambda_prev_1\n        h = lambda_t - lambda_prev_0\n        r0, r1 = h_0 / h, h_1 / h\n        D1_0 = expand_dims(1. / r0, dims) * (model_prev_0 - model_prev_1)\n        D1_1 = expand_dims(1. / r1, dims) * (model_prev_1 - model_prev_2)\n        D1 = D1_0 + expand_dims(r0 / (r0 + r1), dims) * (D1_0 - D1_1)\n        D2 = expand_dims(1. / (r0 + r1), dims) * (D1_0 - D1_1)\n        if self.predict_x0:\n            x_t = (\n                    expand_dims(sigma_t / sigma_prev_0, dims) * x\n                    - expand_dims(alpha_t * (torch.exp(-h) - 1.), dims) * model_prev_0\n                    + expand_dims(alpha_t * ((torch.exp(-h) - 1.) / h + 1.), dims) * D1\n                    - expand_dims(alpha_t * ((torch.exp(-h) - 1. + h) / h ** 2 - 0.5), dims) * D2\n            )\n        else:\n            x_t = (\n                    expand_dims(torch.exp(log_alpha_t - log_alpha_prev_0), dims) * x\n                    - expand_dims(sigma_t * (torch.exp(h) - 1.), dims) * model_prev_0\n                    - expand_dims(sigma_t * ((torch.exp(h) - 1.) / h - 1.), dims) * D1\n                    - expand_dims(sigma_t * ((torch.exp(h) - 1. - h) / h ** 2 - 0.5), dims) * D2\n            )\n        return x_t\n\n    def singlestep_dpm_solver_update(self, x, s, t, order, return_intermediate=False, solver_type='dpm_solver', r1=None,\n                                     r2=None):\n        \"\"\"\n        Singlestep DPM-Solver with the order `order` from time `s` to time `t`.\n        Args:\n            x: A pytorch tensor. The initial value at time `s`.\n            s: A pytorch tensor. The starting time, with the shape (x.shape[0],).\n            t: A pytorch tensor. The ending time, with the shape (x.shape[0],).\n            order: A `int`. The order of DPM-Solver. We only support order == 1 or 2 or 3.\n            return_intermediate: A `bool`. If true, also return the model value at time `s`, `s1` and `s2` (the intermediate times).\n            solver_type: either 'dpm_solver' or 'taylor'. The type for the high-order solvers.\n                The type slightly impacts the performance. We recommend to use 'dpm_solver' type.\n            r1: A `float`. The hyperparameter of the second-order or third-order solver.\n            r2: A `float`. The hyperparameter of the third-order solver.\n        Returns:\n            x_t: A pytorch tensor. 
The approximated solution at time `t`.\n        \"\"\"\n        if order == 1:\n            return self.dpm_solver_first_update(x, s, t, return_intermediate=return_intermediate)\n        elif order == 2:\n            return self.singlestep_dpm_solver_second_update(x, s, t, return_intermediate=return_intermediate,\n                                                            solver_type=solver_type, r1=r1)\n        elif order == 3:\n            return self.singlestep_dpm_solver_third_update(x, s, t, return_intermediate=return_intermediate,\n                                                           solver_type=solver_type, r1=r1, r2=r2)\n        else:\n            raise ValueError(\"Solver order must be 1 or 2 or 3, got {}\".format(order))\n\n    def multistep_dpm_solver_update(self, x, model_prev_list, t_prev_list, t, order, solver_type='dpm_solver'):\n        \"\"\"\n        Multistep DPM-Solver with the order `order` from time `t_prev_list[-1]` to time `t`.\n        Args:\n            x: A pytorch tensor. The initial value at time `s`.\n            model_prev_list: A list of pytorch tensor. The previous computed model values.\n            t_prev_list: A list of pytorch tensor. The previous times, each time has the shape (x.shape[0],)\n            t: A pytorch tensor. The ending time, with the shape (x.shape[0],).\n            order: A `int`. The order of DPM-Solver. We only support order == 1 or 2 or 3.\n            solver_type: either 'dpm_solver' or 'taylor'. The type for the high-order solvers.\n                The type slightly impacts the performance. We recommend to use 'dpm_solver' type.\n        Returns:\n            x_t: A pytorch tensor. The approximated solution at time `t`.\n        \"\"\"\n        if order == 1:\n            return self.dpm_solver_first_update(x, t_prev_list[-1], t, model_s=model_prev_list[-1])\n        elif order == 2:\n            return self.multistep_dpm_solver_second_update(x, model_prev_list, t_prev_list, t, solver_type=solver_type)\n        elif order == 3:\n            return self.multistep_dpm_solver_third_update(x, model_prev_list, t_prev_list, t, solver_type=solver_type)\n        else:\n            raise ValueError(\"Solver order must be 1 or 2 or 3, got {}\".format(order))\n\n    def dpm_solver_adaptive(self, x, order, t_T, t_0, h_init=0.05, atol=0.0078, rtol=0.05, theta=0.9, t_err=1e-5,\n                            solver_type='dpm_solver'):\n        \"\"\"\n        The adaptive step size solver based on singlestep DPM-Solver.\n        Args:\n            x: A pytorch tensor. The initial value at time `t_T`.\n            order: A `int`. The (higher) order of the solver. We only support order == 2 or 3.\n            t_T: A `float`. The starting time of the sampling (default is T).\n            t_0: A `float`. The ending time of the sampling (default is epsilon).\n            h_init: A `float`. The initial step size (for logSNR).\n            atol: A `float`. The absolute tolerance of the solver. For image data, the default setting is 0.0078, followed [1].\n            rtol: A `float`. The relative tolerance of the solver. The default setting is 0.05.\n            theta: A `float`. The safety hyperparameter for adapting the step size. The default setting is 0.9, followed [1].\n            t_err: A `float`. The tolerance for the time. We solve the diffusion ODE until the absolute error between the\n                current time and `t_0` is less than `t_err`. The default setting is 1e-5.\n            solver_type: either 'dpm_solver' or 'taylor'. 
The type for the high-order solvers.\n                The type slightly impacts the performance. We recommend to use 'dpm_solver' type.\n        Returns:\n            x_0: A pytorch tensor. The approximated solution at time `t_0`.\n        [1] A. Jolicoeur-Martineau, K. Li, R. Piché-Taillefer, T. Kachman, and I. Mitliagkas, \"Gotta go fast when generating data with score-based models,\" arXiv preprint arXiv:2105.14080, 2021.\n        \"\"\"\n        ns = self.noise_schedule\n        s = t_T * torch.ones((x.shape[0],)).to(x)\n        lambda_s = ns.marginal_lambda(s)\n        lambda_0 = ns.marginal_lambda(t_0 * torch.ones_like(s).to(x))\n        h = h_init * torch.ones_like(s).to(x)\n        x_prev = x\n        nfe = 0\n        if order == 2:\n            r1 = 0.5\n            lower_update = lambda x, s, t: self.dpm_solver_first_update(x, s, t, return_intermediate=True)\n            higher_update = lambda x, s, t, **kwargs: self.singlestep_dpm_solver_second_update(x, s, t, r1=r1,\n                                                                                               solver_type=solver_type,\n                                                                                               **kwargs)\n        elif order == 3:\n            r1, r2 = 1. / 3., 2. / 3.\n            lower_update = lambda x, s, t: self.singlestep_dpm_solver_second_update(x, s, t, r1=r1,\n                                                                                    return_intermediate=True,\n                                                                                    solver_type=solver_type)\n            higher_update = lambda x, s, t, **kwargs: self.singlestep_dpm_solver_third_update(x, s, t, r1=r1, r2=r2,\n                                                                                              solver_type=solver_type,\n                                                                                              **kwargs)\n        else:\n            raise ValueError(\"For adaptive step size solver, order must be 2 or 3, got {}\".format(order))\n        while torch.abs((s - t_0)).mean() > t_err:\n            t = ns.inverse_lambda(lambda_s + h)\n            x_lower, lower_noise_kwargs = lower_update(x, s, t)\n            x_higher = higher_update(x, s, t, **lower_noise_kwargs)\n            delta = torch.max(torch.ones_like(x).to(x) * atol, rtol * torch.max(torch.abs(x_lower), torch.abs(x_prev)))\n            norm_fn = lambda v: torch.sqrt(torch.square(v.reshape((v.shape[0], -1))).mean(dim=-1, keepdim=True))\n            E = norm_fn((x_higher - x_lower) / delta).max()\n            if torch.all(E <= 1.):\n                x = x_higher\n                s = t\n                x_prev = x_lower\n                lambda_s = ns.marginal_lambda(s)\n            h = torch.min(theta * h * torch.float_power(E, -1. 
/ order).float(), lambda_0 - lambda_s)\n            nfe += order\n        print('adaptive solver nfe', nfe)\n        return x\n\n    def sample(self, x, steps=20, t_start=None, t_end=None, order=3, skip_type='time_uniform',\n               method='singlestep', lower_order_final=True, denoise_to_zero=False, solver_type='dpm_solver',\n               atol=0.0078, rtol=0.05,\n               ):\n        \"\"\"\n        Compute the sample at time `t_end` by DPM-Solver, given the initial `x` at time `t_start`.\n        =====================================================\n        We support the following algorithms for both noise prediction model and data prediction model:\n            - 'singlestep':\n                Singlestep DPM-Solver (i.e. \"DPM-Solver-fast\" in the paper), which combines different orders of singlestep DPM-Solver.\n                We combine all the singlestep solvers with order <= `order` to use up all the function evaluations (steps).\n                The total number of function evaluations (NFE) == `steps`.\n                Given a fixed NFE == `steps`, the sampling procedure is:\n                    - If `order` == 1:\n                        - Denote K = steps. We use K steps of DPM-Solver-1 (i.e. DDIM).\n                    - If `order` == 2:\n                        - Denote K = (steps // 2) + (steps % 2). We take K intermediate time steps for sampling.\n                        - If steps % 2 == 0, we use K steps of singlestep DPM-Solver-2.\n                        - If steps % 2 == 1, we use (K - 1) steps of singlestep DPM-Solver-2 and 1 step of DPM-Solver-1.\n                    - If `order` == 3:\n                        - Denote K = (steps // 3 + 1). We take K intermediate time steps for sampling.\n                        - If steps % 3 == 0, we use (K - 2) steps of singlestep DPM-Solver-3, and 1 step of singlestep DPM-Solver-2 and 1 step of DPM-Solver-1.\n                        - If steps % 3 == 1, we use (K - 1) steps of singlestep DPM-Solver-3 and 1 step of DPM-Solver-1.\n                        - If steps % 3 == 2, we use (K - 1) steps of singlestep DPM-Solver-3 and 1 step of singlestep DPM-Solver-2.\n            - 'multistep':\n                Multistep DPM-Solver with the order of `order`. The total number of function evaluations (NFE) == `steps`.\n                We initialize the first `order` values by lower order multistep solvers.\n                Given a fixed NFE == `steps`, the sampling procedure is:\n                    Denote K = steps.\n                    - If `order` == 1:\n                        - We use K steps of DPM-Solver-1 (i.e. DDIM).\n                    - If `order` == 2:\n                        - We firstly use 1 step of DPM-Solver-1, then use (K - 1) step of multistep DPM-Solver-2.\n                    - If `order` == 3:\n                        - We firstly use 1 step of DPM-Solver-1, then 1 step of multistep DPM-Solver-2, then (K - 2) step of multistep DPM-Solver-3.\n            - 'singlestep_fixed':\n                Fixed order singlestep DPM-Solver (i.e. DPM-Solver-1 or singlestep DPM-Solver-2 or singlestep DPM-Solver-3).\n                We use singlestep DPM-Solver-`order` for `order`=1 or 2 or 3, with total [`steps` // `order`] * `order` NFE.\n            - 'adaptive':\n                Adaptive step size DPM-Solver (i.e. 
\"DPM-Solver-12\" and \"DPM-Solver-23\" in the paper).\n                We ignore `steps` and use adaptive step size DPM-Solver with a higher order of `order`.\n                You can adjust the absolute tolerance `atol` and the relative tolerance `rtol` to balance the computatation costs\n                (NFE) and the sample quality.\n                    - If `order` == 2, we use DPM-Solver-12 which combines DPM-Solver-1 and singlestep DPM-Solver-2.\n                    - If `order` == 3, we use DPM-Solver-23 which combines singlestep DPM-Solver-2 and singlestep DPM-Solver-3.\n        =====================================================\n        Some advices for choosing the algorithm:\n            - For **unconditional sampling** or **guided sampling with small guidance scale** by DPMs:\n                Use singlestep DPM-Solver (\"DPM-Solver-fast\" in the paper) with `order = 3`.\n                e.g.\n                    >>> dpm_solver = DPM_Solver(model_fn, noise_schedule, predict_x0=False)\n                    >>> x_sample = dpm_solver.sample(x, steps=steps, t_start=t_start, t_end=t_end, order=3,\n                            skip_type='time_uniform', method='singlestep')\n            - For **guided sampling with large guidance scale** by DPMs:\n                Use multistep DPM-Solver with `predict_x0 = True` and `order = 2`.\n                e.g.\n                    >>> dpm_solver = DPM_Solver(model_fn, noise_schedule, predict_x0=True)\n                    >>> x_sample = dpm_solver.sample(x, steps=steps, t_start=t_start, t_end=t_end, order=2,\n                            skip_type='time_uniform', method='multistep')\n        We support three types of `skip_type`:\n            - 'logSNR': uniform logSNR for the time steps. **Recommended for low-resolutional images**\n            - 'time_uniform': uniform time for the time steps. **Recommended for high-resolutional images**.\n            - 'time_quadratic': quadratic time for the time steps.\n        =====================================================\n        Args:\n            x: A pytorch tensor. The initial value at time `t_start`\n                e.g. if `t_start` == T, then `x` is a sample from the standard normal distribution.\n            steps: A `int`. The total number of function evaluations (NFE).\n            t_start: A `float`. The starting time of the sampling.\n                If `T` is None, we use self.noise_schedule.T (default is 1.0).\n            t_end: A `float`. The ending time of the sampling.\n                If `t_end` is None, we use 1. / self.noise_schedule.total_N.\n                e.g. if total_N == 1000, we have `t_end` == 1e-3.\n                For discrete-time DPMs:\n                    - We recommend `t_end` == 1. / self.noise_schedule.total_N.\n                For continuous-time DPMs:\n                    - We recommend `t_end` == 1e-3 when `steps` <= 15; and `t_end` == 1e-4 when `steps` > 15.\n            order: A `int`. The order of DPM-Solver.\n            skip_type: A `str`. The type for the spacing of the time steps. 'time_uniform' or 'logSNR' or 'time_quadratic'.\n            method: A `str`. The method for sampling. 'singlestep' or 'multistep' or 'singlestep_fixed' or 'adaptive'.\n            denoise_to_zero: A `bool`. Whether to denoise to time 0 at the final step.\n                Default is `False`. 
If `denoise_to_zero` is `True`, the total NFE is (`steps` + 1).\n                This trick is firstly proposed by DDPM (https://arxiv.org/abs/2006.11239) and\n                score_sde (https://arxiv.org/abs/2011.13456). Such trick can improve the FID\n                for diffusion models sampling by diffusion SDEs for low-resolutional images\n                (such as CIFAR-10). However, we observed that such trick does not matter for\n                high-resolutional images. As it needs an additional NFE, we do not recommend\n                it for high-resolutional images.\n            lower_order_final: A `bool`. Whether to use lower order solvers at the final steps.\n                Only valid for `method=multistep` and `steps < 15`. We empirically find that\n                this trick is a key to stabilizing the sampling by DPM-Solver with very few steps\n                (especially for steps <= 10). So we recommend to set it to be `True`.\n            solver_type: A `str`. The taylor expansion type for the solver. `dpm_solver` or `taylor`. We recommend `dpm_solver`.\n            atol: A `float`. The absolute tolerance of the adaptive step size solver. Valid when `method` == 'adaptive'.\n            rtol: A `float`. The relative tolerance of the adaptive step size solver. Valid when `method` == 'adaptive'.\n        Returns:\n            x_end: A pytorch tensor. The approximated solution at time `t_end`.\n        \"\"\"\n        t_0 = 1. / self.noise_schedule.total_N if t_end is None else t_end\n        t_T = self.noise_schedule.T if t_start is None else t_start\n        device = x.device\n        if method == 'adaptive':\n            with torch.no_grad():\n                x = self.dpm_solver_adaptive(x, order=order, t_T=t_T, t_0=t_0, atol=atol, rtol=rtol,\n                                             solver_type=solver_type)\n        elif method == 'multistep':\n            assert steps >= order\n            timesteps = self.get_time_steps(skip_type=skip_type, t_T=t_T, t_0=t_0, N=steps, device=device)\n            assert timesteps.shape[0] - 1 == steps\n            with torch.no_grad():\n                vec_t = timesteps[0].expand((x.shape[0]))\n                model_prev_list = [self.model_fn(x, vec_t)]\n                t_prev_list = [vec_t]\n                # Init the first `order` values by lower order multistep DPM-Solver.\n                for init_order in tqdm(range(1, order), desc=\"DPM init order\"):\n                    vec_t = timesteps[init_order].expand(x.shape[0])\n                    x = self.multistep_dpm_solver_update(x, model_prev_list, t_prev_list, vec_t, init_order,\n                                                         solver_type=solver_type)\n                    model_prev_list.append(self.model_fn(x, vec_t))\n                    t_prev_list.append(vec_t)\n                # Compute the remaining values by `order`-th order multistep DPM-Solver.\n                for step in tqdm(range(order, steps + 1), desc=\"DPM multistep\"):\n                    vec_t = timesteps[step].expand(x.shape[0])\n                    if lower_order_final and steps < 15:\n                        step_order = min(order, steps + 1 - step)\n                    else:\n                        step_order = order\n                    x = self.multistep_dpm_solver_update(x, model_prev_list, t_prev_list, vec_t, step_order,\n                                                         solver_type=solver_type)\n                    for i in range(order - 1):\n                        t_prev_list[i] = 
t_prev_list[i + 1]\n                        model_prev_list[i] = model_prev_list[i + 1]\n                    t_prev_list[-1] = vec_t\n                    # We do not need to evaluate the final model value.\n                    if step < steps:\n                        model_prev_list[-1] = self.model_fn(x, vec_t)\n        elif method in ['singlestep', 'singlestep_fixed']:\n            if method == 'singlestep':\n                timesteps_outer, orders = self.get_orders_and_timesteps_for_singlestep_solver(steps=steps, order=order,\n                                                                                              skip_type=skip_type,\n                                                                                              t_T=t_T, t_0=t_0,\n                                                                                              device=device)\n            elif method == 'singlestep_fixed':\n                K = steps // order\n                orders = [order, ] * K\n                timesteps_outer = self.get_time_steps(skip_type=skip_type, t_T=t_T, t_0=t_0, N=K, device=device)\n            for i, order in enumerate(orders):\n                t_T_inner, t_0_inner = timesteps_outer[i], timesteps_outer[i + 1]\n                timesteps_inner = self.get_time_steps(skip_type=skip_type, t_T=t_T_inner.item(), t_0=t_0_inner.item(),\n                                                      N=order, device=device)\n                lambda_inner = self.noise_schedule.marginal_lambda(timesteps_inner)\n                vec_s, vec_t = t_T_inner.tile(x.shape[0]), t_0_inner.tile(x.shape[0])\n                h = lambda_inner[-1] - lambda_inner[0]\n                r1 = None if order <= 1 else (lambda_inner[1] - lambda_inner[0]) / h\n                r2 = None if order <= 2 else (lambda_inner[2] - lambda_inner[0]) / h\n                x = self.singlestep_dpm_solver_update(x, vec_s, vec_t, order, solver_type=solver_type, r1=r1, r2=r2)\n        if denoise_to_zero:\n            x = self.denoise_to_zero_fn(x, torch.ones((x.shape[0],)).to(device) * t_0)\n        return x\n\n\n#############################################################\n# other utility functions\n#############################################################\n\ndef interpolate_fn(x, xp, yp):\n    \"\"\"\n    A piecewise linear function y = f(x), using xp and yp as keypoints.\n    We implement f(x) in a differentiable way (i.e. applicable for autograd).\n    The function f(x) is well-defined for all x-axis. 
(For x beyond the bounds of xp, we use the outermost points of xp to define the linear function.)\n    Args:\n        x: PyTorch tensor with shape [N, C], where N is the batch size, C is the number of channels (we use C = 1 for DPM-Solver).\n        xp: PyTorch tensor with shape [C, K], where K is the number of keypoints.\n        yp: PyTorch tensor with shape [C, K].\n    Returns:\n        The function values f(x), with shape [N, C].\n    \"\"\"\n    N, K = x.shape[0], xp.shape[1]\n    all_x = torch.cat([x.unsqueeze(2), xp.unsqueeze(0).repeat((N, 1, 1))], dim=2)\n    sorted_all_x, x_indices = torch.sort(all_x, dim=2)\n    x_idx = torch.argmin(x_indices, dim=2)\n    cand_start_idx = x_idx - 1\n    start_idx = torch.where(\n        torch.eq(x_idx, 0),\n        torch.tensor(1, device=x.device),\n        torch.where(\n            torch.eq(x_idx, K), torch.tensor(K - 2, device=x.device), cand_start_idx,\n        ),\n    )\n    end_idx = torch.where(torch.eq(start_idx, cand_start_idx), start_idx + 2, start_idx + 1)\n    start_x = torch.gather(sorted_all_x, dim=2, index=start_idx.unsqueeze(2)).squeeze(2)\n    end_x = torch.gather(sorted_all_x, dim=2, index=end_idx.unsqueeze(2)).squeeze(2)\n    start_idx2 = torch.where(\n        torch.eq(x_idx, 0),\n        torch.tensor(0, device=x.device),\n        torch.where(\n            torch.eq(x_idx, K), torch.tensor(K - 2, device=x.device), cand_start_idx,\n        ),\n    )\n    y_positions_expanded = yp.unsqueeze(0).expand(N, -1, -1)\n    start_y = torch.gather(y_positions_expanded, dim=2, index=start_idx2.unsqueeze(2)).squeeze(2)\n    end_y = torch.gather(y_positions_expanded, dim=2, index=(start_idx2 + 1).unsqueeze(2)).squeeze(2)\n    cand = start_y + (x - start_x) * (end_y - start_y) / (end_x - start_x)\n    return cand\n\n\ndef expand_dims(v, dims):\n    \"\"\"\n    Expand the tensor `v` to `dims` dimensions.\n    Args:\n        `v`: a PyTorch tensor with shape [N].\n        `dims`: an `int`.\n    Returns:\n        a PyTorch tensor with shape [N, 1, 1, ..., 1] and the total dimension is `dims`.\n    \"\"\"\n    return v[(...,) + (None,) * (dims - 1)]"
  },
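The `DPM_Solver` class above is normally driven through the `NoiseScheduleVP` and `model_wrapper` helpers from the same module (`sampler.py` below shows the in-repo wiring). A minimal sketch of the guided-sampling recipe recommended in the `sample` docstring (data prediction, multistep, order 2); `unet_apply`, `cond`, `uncond`, `alphas_cumprod`, and the latent shape are hypothetical placeholders, not names from this repository:

```python
# Minimal sketch, assuming an eps-predicting UNet `unet_apply(x, t, c)` and a
# discrete-time alphas_cumprod tensor of shape (1000,); all names are placeholders.
import torch

ns = NoiseScheduleVP('discrete', alphas_cumprod=alphas_cumprod)

model_fn = model_wrapper(
    lambda x, t, c: unet_apply(x, t, c),
    ns,
    model_type="noise",                      # eps-parameterized model
    guidance_type="classifier-free",
    condition=cond,
    unconditional_condition=uncond,
    guidance_scale=7.5,
)

dpm_solver = DPM_Solver(model_fn, ns, predict_x0=True, thresholding=False)
x_T = torch.randn(4, 4, 64, 64)              # start from noise at t_start = T
x_0 = dpm_solver.sample(x_T, steps=20, order=2, skip_type="time_uniform",
                        method="multistep", lower_order_final=True)
```

This mirrors the call made by `DPMSolverSampler.sample` in the file below, which also uses multistep order-2 sampling with `predict_x0=True`.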
  {
    "path": "ToonCrafter/ldm/models/diffusion/dpm_solver/sampler.py",
    "content": "\"\"\"SAMPLING ONLY.\"\"\"\nimport torch\n\nfrom .dpm_solver import NoiseScheduleVP, model_wrapper, DPM_Solver\n\n\nMODEL_TYPES = {\n    \"eps\": \"noise\",\n    \"v\": \"v\"\n}\n\n\nclass DPMSolverSampler(object):\n    def __init__(self, model, **kwargs):\n        super().__init__()\n        self.model = model\n        to_torch = lambda x: x.clone().detach().to(torch.float32).to(model.device)\n        self.register_buffer('alphas_cumprod', to_torch(model.alphas_cumprod))\n\n    def register_buffer(self, name, attr):\n        if type(attr) == torch.Tensor:\n            if attr.device != torch.device(\"cuda\"):\n                attr = attr.to(torch.device(\"cuda\"))\n        setattr(self, name, attr)\n\n    @torch.no_grad()\n    def sample(self,\n               S,\n               batch_size,\n               shape,\n               conditioning=None,\n               callback=None,\n               normals_sequence=None,\n               img_callback=None,\n               quantize_x0=False,\n               eta=0.,\n               mask=None,\n               x0=None,\n               temperature=1.,\n               noise_dropout=0.,\n               score_corrector=None,\n               corrector_kwargs=None,\n               verbose=True,\n               x_T=None,\n               log_every_t=100,\n               unconditional_guidance_scale=1.,\n               unconditional_conditioning=None,\n               # this has to come in the same format as the conditioning, # e.g. as encoded tokens, ...\n               **kwargs\n               ):\n        if conditioning is not None:\n            if isinstance(conditioning, dict):\n                cbs = conditioning[list(conditioning.keys())[0]].shape[0]\n                if cbs != batch_size:\n                    print(f\"Warning: Got {cbs} conditionings but batch-size is {batch_size}\")\n            else:\n                if conditioning.shape[0] != batch_size:\n                    print(f\"Warning: Got {conditioning.shape[0]} conditionings but batch-size is {batch_size}\")\n\n        # sampling\n        C, H, W = shape\n        size = (batch_size, C, H, W)\n\n        print(f'Data shape for DPM-Solver sampling is {size}, sampling steps {S}')\n\n        device = self.model.betas.device\n        if x_T is None:\n            img = torch.randn(size, device=device)\n        else:\n            img = x_T\n\n        ns = NoiseScheduleVP('discrete', alphas_cumprod=self.alphas_cumprod)\n\n        model_fn = model_wrapper(\n            lambda x, t, c: self.model.apply_model(x, t, c),\n            ns,\n            model_type=MODEL_TYPES[self.model.parameterization],\n            guidance_type=\"classifier-free\",\n            condition=conditioning,\n            unconditional_condition=unconditional_conditioning,\n            guidance_scale=unconditional_guidance_scale,\n        )\n\n        dpm_solver = DPM_Solver(model_fn, ns, predict_x0=True, thresholding=False)\n        x = dpm_solver.sample(img, steps=S, skip_type=\"time_uniform\", method=\"multistep\", order=2, lower_order_final=True)\n\n        return x.to(device), None"
  },
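`DPMSolverSampler` follows the same `sample(S, batch_size, shape, ...)` calling convention as the other samplers in this package. A short usage sketch, assuming a loaded latent-diffusion `model` exposing `apply_model`, `alphas_cumprod`, `betas`, and `parameterization`, with `cond`/`uncond` produced by its conditioning stage (all placeholders):

```python
# Usage sketch only; `model`, `cond`, and `uncond` are assumed to come from a
# loaded latent-diffusion checkpoint and are not defined in this snippet.
sampler = DPMSolverSampler(model)
samples, _ = sampler.sample(
    S=20,                                   # solver steps (NFE)
    batch_size=4,
    shape=(4, 64, 64),                      # latent (C, H, W); batch dim is added internally
    conditioning=cond,
    unconditional_conditioning=uncond,
    unconditional_guidance_scale=7.5,
)
```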
  {
    "path": "ToonCrafter/ldm/models/diffusion/plms.py",
    "content": "\"\"\"SAMPLING ONLY.\"\"\"\n\nimport torch\nimport numpy as np\nfrom tqdm import tqdm\nfrom functools import partial\n\nfrom ToonCrafter.ldm.modules.diffusionmodules.util import make_ddim_sampling_parameters, make_ddim_timesteps, noise_like\nfrom ToonCrafter.ldm.models.diffusion.sampling_util import norm_thresholding\n\n\nclass PLMSSampler(object):\n    def __init__(self, model, schedule=\"linear\", **kwargs):\n        super().__init__()\n        self.model = model\n        self.ddpm_num_timesteps = model.num_timesteps\n        self.schedule = schedule\n\n    def register_buffer(self, name, attr):\n        if type(attr) == torch.Tensor:\n            if attr.device != torch.device(\"cuda\"):\n                attr = attr.to(torch.device(\"cuda\"))\n        setattr(self, name, attr)\n\n    def make_schedule(self, ddim_num_steps, ddim_discretize=\"uniform\", ddim_eta=0., verbose=True):\n        if ddim_eta != 0:\n            raise ValueError('ddim_eta must be 0 for PLMS')\n        self.ddim_timesteps = make_ddim_timesteps(ddim_discr_method=ddim_discretize, num_ddim_timesteps=ddim_num_steps,\n                                                  num_ddpm_timesteps=self.ddpm_num_timesteps,verbose=verbose)\n        alphas_cumprod = self.model.alphas_cumprod\n        assert alphas_cumprod.shape[0] == self.ddpm_num_timesteps, 'alphas have to be defined for each timestep'\n        to_torch = lambda x: x.clone().detach().to(torch.float32).to(self.model.device)\n\n        self.register_buffer('betas', to_torch(self.model.betas))\n        self.register_buffer('alphas_cumprod', to_torch(alphas_cumprod))\n        self.register_buffer('alphas_cumprod_prev', to_torch(self.model.alphas_cumprod_prev))\n\n        # calculations for diffusion q(x_t | x_{t-1}) and others\n        self.register_buffer('sqrt_alphas_cumprod', to_torch(np.sqrt(alphas_cumprod.cpu())))\n        self.register_buffer('sqrt_one_minus_alphas_cumprod', to_torch(np.sqrt(1. - alphas_cumprod.cpu())))\n        self.register_buffer('log_one_minus_alphas_cumprod', to_torch(np.log(1. - alphas_cumprod.cpu())))\n        self.register_buffer('sqrt_recip_alphas_cumprod', to_torch(np.sqrt(1. / alphas_cumprod.cpu())))\n        self.register_buffer('sqrt_recipm1_alphas_cumprod', to_torch(np.sqrt(1. / alphas_cumprod.cpu() - 1)))\n\n        # ddim sampling parameters\n        ddim_sigmas, ddim_alphas, ddim_alphas_prev = make_ddim_sampling_parameters(alphacums=alphas_cumprod.cpu(),\n                                                                                   ddim_timesteps=self.ddim_timesteps,\n                                                                                   eta=ddim_eta,verbose=verbose)\n        self.register_buffer('ddim_sigmas', ddim_sigmas)\n        self.register_buffer('ddim_alphas', ddim_alphas)\n        self.register_buffer('ddim_alphas_prev', ddim_alphas_prev)\n        self.register_buffer('ddim_sqrt_one_minus_alphas', np.sqrt(1. 
- ddim_alphas))\n        sigmas_for_original_sampling_steps = ddim_eta * torch.sqrt(\n            (1 - self.alphas_cumprod_prev) / (1 - self.alphas_cumprod) * (\n                        1 - self.alphas_cumprod / self.alphas_cumprod_prev))\n        self.register_buffer('ddim_sigmas_for_original_num_steps', sigmas_for_original_sampling_steps)\n\n    @torch.no_grad()\n    def sample(self,\n               S,\n               batch_size,\n               shape,\n               conditioning=None,\n               callback=None,\n               normals_sequence=None,\n               img_callback=None,\n               quantize_x0=False,\n               eta=0.,\n               mask=None,\n               x0=None,\n               temperature=1.,\n               noise_dropout=0.,\n               score_corrector=None,\n               corrector_kwargs=None,\n               verbose=True,\n               x_T=None,\n               log_every_t=100,\n               unconditional_guidance_scale=1.,\n               unconditional_conditioning=None,\n               # this has to come in the same format as the conditioning, # e.g. as encoded tokens, ...\n               dynamic_threshold=None,\n               **kwargs\n               ):\n        if conditioning is not None:\n            if isinstance(conditioning, dict):\n                cbs = conditioning[list(conditioning.keys())[0]].shape[0]\n                if cbs != batch_size:\n                    print(f\"Warning: Got {cbs} conditionings but batch-size is {batch_size}\")\n            else:\n                if conditioning.shape[0] != batch_size:\n                    print(f\"Warning: Got {conditioning.shape[0]} conditionings but batch-size is {batch_size}\")\n\n        self.make_schedule(ddim_num_steps=S, ddim_eta=eta, verbose=verbose)\n        # sampling\n        C, H, W = shape\n        size = (batch_size, C, H, W)\n        print(f'Data shape for PLMS sampling is {size}')\n\n        samples, intermediates = self.plms_sampling(conditioning, size,\n                                                    callback=callback,\n                                                    img_callback=img_callback,\n                                                    quantize_denoised=quantize_x0,\n                                                    mask=mask, x0=x0,\n                                                    ddim_use_original_steps=False,\n                                                    noise_dropout=noise_dropout,\n                                                    temperature=temperature,\n                                                    score_corrector=score_corrector,\n                                                    corrector_kwargs=corrector_kwargs,\n                                                    x_T=x_T,\n                                                    log_every_t=log_every_t,\n                                                    unconditional_guidance_scale=unconditional_guidance_scale,\n                                                    unconditional_conditioning=unconditional_conditioning,\n                                                    dynamic_threshold=dynamic_threshold,\n                                                    )\n        return samples, intermediates\n\n    @torch.no_grad()\n    def plms_sampling(self, cond, shape,\n                      x_T=None, ddim_use_original_steps=False,\n                      callback=None, timesteps=None, quantize_denoised=False,\n                      mask=None, x0=None, img_callback=None, 
log_every_t=100,\n                      temperature=1., noise_dropout=0., score_corrector=None, corrector_kwargs=None,\n                      unconditional_guidance_scale=1., unconditional_conditioning=None,\n                      dynamic_threshold=None):\n        device = self.model.betas.device\n        b = shape[0]\n        if x_T is None:\n            img = torch.randn(shape, device=device)\n        else:\n            img = x_T\n\n        if timesteps is None:\n            timesteps = self.ddpm_num_timesteps if ddim_use_original_steps else self.ddim_timesteps\n        elif timesteps is not None and not ddim_use_original_steps:\n            subset_end = int(min(timesteps / self.ddim_timesteps.shape[0], 1) * self.ddim_timesteps.shape[0]) - 1\n            timesteps = self.ddim_timesteps[:subset_end]\n\n        intermediates = {'x_inter': [img], 'pred_x0': [img]}\n        time_range = list(reversed(range(0,timesteps))) if ddim_use_original_steps else np.flip(timesteps)\n        total_steps = timesteps if ddim_use_original_steps else timesteps.shape[0]\n        print(f\"Running PLMS Sampling with {total_steps} timesteps\")\n\n        iterator = tqdm(time_range, desc='PLMS Sampler', total=total_steps)\n        old_eps = []\n\n        for i, step in enumerate(iterator):\n            index = total_steps - i - 1\n            ts = torch.full((b,), step, device=device, dtype=torch.long)\n            ts_next = torch.full((b,), time_range[min(i + 1, len(time_range) - 1)], device=device, dtype=torch.long)\n\n            if mask is not None:\n                assert x0 is not None\n                img_orig = self.model.q_sample(x0, ts)  # TODO: deterministic forward pass?\n                img = img_orig * mask + (1. - mask) * img\n\n            outs = self.p_sample_plms(img, cond, ts, index=index, use_original_steps=ddim_use_original_steps,\n                                      quantize_denoised=quantize_denoised, temperature=temperature,\n                                      noise_dropout=noise_dropout, score_corrector=score_corrector,\n                                      corrector_kwargs=corrector_kwargs,\n                                      unconditional_guidance_scale=unconditional_guidance_scale,\n                                      unconditional_conditioning=unconditional_conditioning,\n                                      old_eps=old_eps, t_next=ts_next,\n                                      dynamic_threshold=dynamic_threshold)\n            img, pred_x0, e_t = outs\n            old_eps.append(e_t)\n            if len(old_eps) >= 4:\n                old_eps.pop(0)\n            if callback: callback(i)\n            if img_callback: img_callback(pred_x0, i)\n\n            if index % log_every_t == 0 or index == total_steps - 1:\n                intermediates['x_inter'].append(img)\n                intermediates['pred_x0'].append(pred_x0)\n\n        return img, intermediates\n\n    @torch.no_grad()\n    def p_sample_plms(self, x, c, t, index, repeat_noise=False, use_original_steps=False, quantize_denoised=False,\n                      temperature=1., noise_dropout=0., score_corrector=None, corrector_kwargs=None,\n                      unconditional_guidance_scale=1., unconditional_conditioning=None, old_eps=None, t_next=None,\n                      dynamic_threshold=None):\n        b, *_, device = *x.shape, x.device\n\n        def get_model_output(x, t):\n            if unconditional_conditioning is None or unconditional_guidance_scale == 1.:\n                e_t = 
self.model.apply_model(x, t, c)\n            else:\n                x_in = torch.cat([x] * 2)\n                t_in = torch.cat([t] * 2)\n                c_in = torch.cat([unconditional_conditioning, c])\n                e_t_uncond, e_t = self.model.apply_model(x_in, t_in, c_in).chunk(2)\n                e_t = e_t_uncond + unconditional_guidance_scale * (e_t - e_t_uncond)\n\n            if score_corrector is not None:\n                assert self.model.parameterization == \"eps\"\n                e_t = score_corrector.modify_score(self.model, e_t, x, t, c, **corrector_kwargs)\n\n            return e_t\n\n        alphas = self.model.alphas_cumprod if use_original_steps else self.ddim_alphas\n        alphas_prev = self.model.alphas_cumprod_prev if use_original_steps else self.ddim_alphas_prev\n        sqrt_one_minus_alphas = self.model.sqrt_one_minus_alphas_cumprod if use_original_steps else self.ddim_sqrt_one_minus_alphas\n        sigmas = self.model.ddim_sigmas_for_original_num_steps if use_original_steps else self.ddim_sigmas\n\n        def get_x_prev_and_pred_x0(e_t, index):\n            # select parameters corresponding to the currently considered timestep\n            a_t = torch.full((b, 1, 1, 1), alphas[index], device=device)\n            a_prev = torch.full((b, 1, 1, 1), alphas_prev[index], device=device)\n            sigma_t = torch.full((b, 1, 1, 1), sigmas[index], device=device)\n            sqrt_one_minus_at = torch.full((b, 1, 1, 1), sqrt_one_minus_alphas[index], device=device)\n\n            # current prediction for x_0\n            pred_x0 = (x - sqrt_one_minus_at * e_t) / a_t.sqrt()\n            if quantize_denoised:\n                pred_x0, _, *_ = self.model.first_stage_model.quantize(pred_x0)\n            if dynamic_threshold is not None:\n                pred_x0 = norm_thresholding(pred_x0, dynamic_threshold)\n            # direction pointing to x_t\n            dir_xt = (1. - a_prev - sigma_t**2).sqrt() * e_t\n            noise = sigma_t * noise_like(x.shape, device, repeat_noise) * temperature\n            if noise_dropout > 0.:\n                noise = torch.nn.functional.dropout(noise, p=noise_dropout)\n            x_prev = a_prev.sqrt() * pred_x0 + dir_xt + noise\n            return x_prev, pred_x0\n\n        e_t = get_model_output(x, t)\n        if len(old_eps) == 0:\n            # Pseudo Improved Euler (2nd order)\n            x_prev, pred_x0 = get_x_prev_and_pred_x0(e_t, index)\n            e_t_next = get_model_output(x_prev, t_next)\n            e_t_prime = (e_t + e_t_next) / 2\n        elif len(old_eps) == 1:\n            # 2nd order Pseudo Linear Multistep (Adams-Bashforth)\n            e_t_prime = (3 * e_t - old_eps[-1]) / 2\n        elif len(old_eps) == 2:\n            # 3rd order Pseudo Linear Multistep (Adams-Bashforth)\n            e_t_prime = (23 * e_t - 16 * old_eps[-1] + 5 * old_eps[-2]) / 12\n        elif len(old_eps) >= 3:\n            # 4th order Pseudo Linear Multistep (Adams-Bashforth)\n            e_t_prime = (55 * e_t - 59 * old_eps[-1] + 37 * old_eps[-2] - 9 * old_eps[-3]) / 24\n\n        x_prev, pred_x0 = get_x_prev_and_pred_x0(e_t_prime, index)\n\n        return x_prev, pred_x0, e_t\n"
  },
  {
    "path": "ToonCrafter/ldm/models/diffusion/sampling_util.py",
    "content": "import torch\nimport numpy as np\n\n\ndef append_dims(x, target_dims):\n    \"\"\"Appends dimensions to the end of a tensor until it has target_dims dimensions.\n    From https://github.com/crowsonkb/k-diffusion/blob/master/k_diffusion/utils.py\"\"\"\n    dims_to_append = target_dims - x.ndim\n    if dims_to_append < 0:\n        raise ValueError(f'input has {x.ndim} dims but target_dims is {target_dims}, which is less')\n    return x[(...,) + (None,) * dims_to_append]\n\n\ndef norm_thresholding(x0, value):\n    s = append_dims(x0.pow(2).flatten(1).mean(1).sqrt().clamp(min=value), x0.ndim)\n    return x0 * (value / s)\n\n\ndef spatial_norm_thresholding(x0, value):\n    # b c h w\n    s = x0.pow(2).mean(1, keepdim=True).sqrt().clamp(min=value)\n    return x0 * (value / s)"
  },
  {
    "path": "ToonCrafter/ldm/modules/attention.py",
    "content": "from inspect import isfunction\nimport math\nimport torch\nimport torch.nn.functional as F\nfrom torch import nn, einsum\nfrom einops import rearrange, repeat\nfrom typing import Optional, Any\n\nfrom ToonCrafter.ldm.modules.diffusionmodules.util import checkpoint\n\n\ntry:\n    import xformers\n    import xformers.ops\n    XFORMERS_IS_AVAILBLE = True\nexcept:\n    XFORMERS_IS_AVAILBLE = False\n\n# CrossAttn precision handling\nimport os\n_ATTN_PRECISION = os.environ.get(\"ATTN_PRECISION\", \"fp32\")\n\ndef exists(val):\n    return val is not None\n\n\ndef uniq(arr):\n    return{el: True for el in arr}.keys()\n\n\ndef default(val, d):\n    if exists(val):\n        return val\n    return d() if isfunction(d) else d\n\n\ndef max_neg_value(t):\n    return -torch.finfo(t.dtype).max\n\n\ndef init_(tensor):\n    dim = tensor.shape[-1]\n    std = 1 / math.sqrt(dim)\n    tensor.uniform_(-std, std)\n    return tensor\n\n\n# feedforward\nclass GEGLU(nn.Module):\n    def __init__(self, dim_in, dim_out):\n        super().__init__()\n        self.proj = nn.Linear(dim_in, dim_out * 2)\n\n    def forward(self, x):\n        x, gate = self.proj(x).chunk(2, dim=-1)\n        return x * F.gelu(gate)\n\n\nclass FeedForward(nn.Module):\n    def __init__(self, dim, dim_out=None, mult=4, glu=False, dropout=0.):\n        super().__init__()\n        inner_dim = int(dim * mult)\n        dim_out = default(dim_out, dim)\n        project_in = nn.Sequential(\n            nn.Linear(dim, inner_dim),\n            nn.GELU()\n        ) if not glu else GEGLU(dim, inner_dim)\n\n        self.net = nn.Sequential(\n            project_in,\n            nn.Dropout(dropout),\n            nn.Linear(inner_dim, dim_out)\n        )\n\n    def forward(self, x):\n        return self.net(x)\n\n\ndef zero_module(module):\n    \"\"\"\n    Zero out the parameters of a module and return it.\n    \"\"\"\n    for p in module.parameters():\n        p.detach().zero_()\n    return module\n\n\ndef Normalize(in_channels):\n    return torch.nn.GroupNorm(num_groups=32, num_channels=in_channels, eps=1e-6, affine=True)\n\n\nclass SpatialSelfAttention(nn.Module):\n    def __init__(self, in_channels):\n        super().__init__()\n        self.in_channels = in_channels\n\n        self.norm = Normalize(in_channels)\n        self.q = torch.nn.Conv2d(in_channels,\n                                 in_channels,\n                                 kernel_size=1,\n                                 stride=1,\n                                 padding=0)\n        self.k = torch.nn.Conv2d(in_channels,\n                                 in_channels,\n                                 kernel_size=1,\n                                 stride=1,\n                                 padding=0)\n        self.v = torch.nn.Conv2d(in_channels,\n                                 in_channels,\n                                 kernel_size=1,\n                                 stride=1,\n                                 padding=0)\n        self.proj_out = torch.nn.Conv2d(in_channels,\n                                        in_channels,\n                                        kernel_size=1,\n                                        stride=1,\n                                        padding=0)\n\n    def forward(self, x):\n        h_ = x\n        h_ = self.norm(h_)\n        q = self.q(h_)\n        k = self.k(h_)\n        v = self.v(h_)\n\n        # compute attention\n        b,c,h,w = q.shape\n        q = rearrange(q, 'b c h w -> b (h w) c')\n        k = rearrange(k, 'b c h w 
-> b c (h w)')\n        w_ = torch.einsum('bij,bjk->bik', q, k)\n\n        w_ = w_ * (int(c)**(-0.5))\n        w_ = torch.nn.functional.softmax(w_, dim=2)\n\n        # attend to values\n        v = rearrange(v, 'b c h w -> b c (h w)')\n        w_ = rearrange(w_, 'b i j -> b j i')\n        h_ = torch.einsum('bij,bjk->bik', v, w_)\n        h_ = rearrange(h_, 'b c (h w) -> b c h w', h=h)\n        h_ = self.proj_out(h_)\n\n        return x+h_\n\n\nclass CrossAttention(nn.Module):\n    def __init__(self, query_dim, context_dim=None, heads=8, dim_head=64, dropout=0.):\n        super().__init__()\n        inner_dim = dim_head * heads\n        context_dim = default(context_dim, query_dim)\n\n        self.scale = dim_head ** -0.5\n        self.heads = heads\n\n        self.to_q = nn.Linear(query_dim, inner_dim, bias=False)\n        self.to_k = nn.Linear(context_dim, inner_dim, bias=False)\n        self.to_v = nn.Linear(context_dim, inner_dim, bias=False)\n\n        self.to_out = nn.Sequential(\n            nn.Linear(inner_dim, query_dim),\n            nn.Dropout(dropout)\n        )\n\n    def forward(self, x, context=None, mask=None):\n        h = self.heads\n\n        q = self.to_q(x)\n        context = default(context, x)\n        k = self.to_k(context)\n        v = self.to_v(context)\n\n        q, k, v = map(lambda t: rearrange(t, 'b n (h d) -> (b h) n d', h=h), (q, k, v))\n\n        # force cast to fp32 to avoid overflowing\n        if _ATTN_PRECISION ==\"fp32\":\n            with torch.autocast(enabled=False, device_type = 'cuda'):\n                q, k = q.float(), k.float()\n                sim = einsum('b i d, b j d -> b i j', q, k) * self.scale\n        else:\n            sim = einsum('b i d, b j d -> b i j', q, k) * self.scale\n        \n        del q, k\n    \n        if exists(mask):\n            mask = rearrange(mask, 'b ... -> b (...)')\n            max_neg_value = -torch.finfo(sim.dtype).max\n            mask = repeat(mask, 'b j -> (b h) () j', h=h)\n            sim.masked_fill_(~mask, max_neg_value)\n\n        # attention, what we cannot get enough of\n        sim = sim.softmax(dim=-1)\n\n        out = einsum('b i j, b j d -> b i d', sim, v)\n        out = rearrange(out, '(b h) n d -> b n (h d)', h=h)\n        return self.to_out(out)\n\n\nclass MemoryEfficientCrossAttention(nn.Module):\n    # https://github.com/MatthieuTPHR/diffusers/blob/d80b531ff8060ec1ea982b65a1b8df70f73aa67c/src/diffusers/models/attention.py#L223\n    def __init__(self, query_dim, context_dim=None, heads=8, dim_head=64, dropout=0.0):\n        super().__init__()\n        print(f\"Setting up {self.__class__.__name__}. 
Query dim is {query_dim}, context_dim is {context_dim} and using \"\n              f\"{heads} heads.\")\n        inner_dim = dim_head * heads\n        context_dim = default(context_dim, query_dim)\n\n        self.heads = heads\n        self.dim_head = dim_head\n\n        self.to_q = nn.Linear(query_dim, inner_dim, bias=False)\n        self.to_k = nn.Linear(context_dim, inner_dim, bias=False)\n        self.to_v = nn.Linear(context_dim, inner_dim, bias=False)\n\n        self.to_out = nn.Sequential(nn.Linear(inner_dim, query_dim), nn.Dropout(dropout))\n        self.attention_op: Optional[Any] = None\n\n    def forward(self, x, context=None, mask=None):\n        q = self.to_q(x)\n        context = default(context, x)\n        k = self.to_k(context)\n        v = self.to_v(context)\n\n        b, _, _ = q.shape\n        q, k, v = map(\n            lambda t: t.unsqueeze(3)\n            .reshape(b, t.shape[1], self.heads, self.dim_head)\n            .permute(0, 2, 1, 3)\n            .reshape(b * self.heads, t.shape[1], self.dim_head)\n            .contiguous(),\n            (q, k, v),\n        )\n\n        # actually compute the attention, what we cannot get enough of\n        out = xformers.ops.memory_efficient_attention(q, k, v, attn_bias=None, op=self.attention_op)\n\n        if exists(mask):\n            raise NotImplementedError\n        out = (\n            out.unsqueeze(0)\n            .reshape(b, self.heads, out.shape[1], self.dim_head)\n            .permute(0, 2, 1, 3)\n            .reshape(b, out.shape[1], self.heads * self.dim_head)\n        )\n        return self.to_out(out)\n\n\nclass BasicTransformerBlock(nn.Module):\n    ATTENTION_MODES = {\n        \"softmax\": CrossAttention,  # vanilla attention\n        \"softmax-xformers\": MemoryEfficientCrossAttention\n    }\n    def __init__(self, dim, n_heads, d_head, dropout=0., context_dim=None, gated_ff=True, checkpoint=True,\n                 disable_self_attn=False):\n        super().__init__()\n        attn_mode = \"softmax-xformers\" if XFORMERS_IS_AVAILBLE else \"softmax\"\n        assert attn_mode in self.ATTENTION_MODES\n        attn_cls = self.ATTENTION_MODES[attn_mode]\n        self.disable_self_attn = disable_self_attn\n        self.attn1 = attn_cls(query_dim=dim, heads=n_heads, dim_head=d_head, dropout=dropout,\n                              context_dim=context_dim if self.disable_self_attn else None)  # is a self-attention if not self.disable_self_attn\n        self.ff = FeedForward(dim, dropout=dropout, glu=gated_ff)\n        self.attn2 = attn_cls(query_dim=dim, context_dim=context_dim,\n                              heads=n_heads, dim_head=d_head, dropout=dropout)  # is self-attn if context is none\n        self.norm1 = nn.LayerNorm(dim)\n        self.norm2 = nn.LayerNorm(dim)\n        self.norm3 = nn.LayerNorm(dim)\n        self.checkpoint = checkpoint\n\n    def forward(self, x, context=None):\n        return checkpoint(self._forward, (x, context), self.parameters(), self.checkpoint)\n\n    def _forward(self, x, context=None):\n        x = self.attn1(self.norm1(x), context=context if self.disable_self_attn else None) + x\n        x = self.attn2(self.norm2(x), context=context) + x\n        x = self.ff(self.norm3(x)) + x\n        return x\n\n\nclass SpatialTransformer(nn.Module):\n    \"\"\"\n    Transformer block for image-like data.\n    First, project the input (aka embedding)\n    and reshape to b, t, d.\n    Then apply standard transformer action.\n    Finally, reshape to image\n    NEW: use_linear for more efficiency 
instead of the 1x1 convs\n    \"\"\"\n    def __init__(self, in_channels, n_heads, d_head,\n                 depth=1, dropout=0., context_dim=None,\n                 disable_self_attn=False, use_linear=False,\n                 use_checkpoint=True):\n        super().__init__()\n        if exists(context_dim) and not isinstance(context_dim, list):\n            context_dim = [context_dim]\n        self.in_channels = in_channels\n        inner_dim = n_heads * d_head\n        self.norm = Normalize(in_channels)\n        if not use_linear:\n            self.proj_in = nn.Conv2d(in_channels,\n                                     inner_dim,\n                                     kernel_size=1,\n                                     stride=1,\n                                     padding=0)\n        else:\n            self.proj_in = nn.Linear(in_channels, inner_dim)\n\n        self.transformer_blocks = nn.ModuleList(\n            [BasicTransformerBlock(inner_dim, n_heads, d_head, dropout=dropout, context_dim=context_dim[d],\n                                   disable_self_attn=disable_self_attn, checkpoint=use_checkpoint)\n                for d in range(depth)]\n        )\n        if not use_linear:\n            self.proj_out = zero_module(nn.Conv2d(inner_dim,\n                                                  in_channels,\n                                                  kernel_size=1,\n                                                  stride=1,\n                                                  padding=0))\n        else:\n            # project back from inner_dim to in_channels, mirroring the conv branch\n            self.proj_out = zero_module(nn.Linear(inner_dim, in_channels))\n        self.use_linear = use_linear\n\n    def forward(self, x, context=None):\n        # note: if no context is given, cross-attention defaults to self-attention\n        if not isinstance(context, list):\n            context = [context]\n        b, c, h, w = x.shape\n        x_in = x\n        x = self.norm(x)\n        if not self.use_linear:\n            x = self.proj_in(x)\n        x = rearrange(x, 'b c h w -> b (h w) c').contiguous()\n        if self.use_linear:\n            x = self.proj_in(x)\n        for i, block in enumerate(self.transformer_blocks):\n            x = block(x, context=context[i])\n        if self.use_linear:\n            x = self.proj_out(x)\n        x = rearrange(x, 'b (h w) c -> b c h w', h=h, w=w).contiguous()\n        if not self.use_linear:\n            x = self.proj_out(x)\n        return x + x_in\n\n"
  },
  {
    "path": "ToonCrafter/ldm/modules/diffusionmodules/__init__.py",
    "content": ""
  },
  {
    "path": "ToonCrafter/ldm/modules/diffusionmodules/model.py",
    "content": "# pytorch_diffusion + derived encoder decoder\nimport math\nimport torch\nimport torch.nn as nn\nimport numpy as np\nfrom einops import rearrange\nfrom typing import Optional, Any\n\nfrom ToonCrafter.ldm.modules.attention import MemoryEfficientCrossAttention\n\ntry:\n    import xformers\n    import xformers.ops\n    XFORMERS_IS_AVAILBLE = True\nexcept:\n    XFORMERS_IS_AVAILBLE = False\n    print(\"No module 'xformers'. Proceeding without it.\")\n\n\ndef get_timestep_embedding(timesteps, embedding_dim):\n    \"\"\"\n    This matches the implementation in Denoising Diffusion Probabilistic Models:\n    From Fairseq.\n    Build sinusoidal embeddings.\n    This matches the implementation in tensor2tensor, but differs slightly\n    from the description in Section 3.5 of \"Attention Is All You Need\".\n    \"\"\"\n    assert len(timesteps.shape) == 1\n\n    half_dim = embedding_dim // 2\n    emb = math.log(10000) / (half_dim - 1)\n    emb = torch.exp(torch.arange(half_dim, dtype=torch.float32) * -emb)\n    emb = emb.to(device=timesteps.device)\n    emb = timesteps.float()[:, None] * emb[None, :]\n    emb = torch.cat([torch.sin(emb), torch.cos(emb)], dim=1)\n    if embedding_dim % 2 == 1:  # zero pad\n        emb = torch.nn.functional.pad(emb, (0,1,0,0))\n    return emb\n\n\ndef nonlinearity(x):\n    # swish\n    return x*torch.sigmoid(x)\n\n\ndef Normalize(in_channels, num_groups=32):\n    return torch.nn.GroupNorm(num_groups=num_groups, num_channels=in_channels, eps=1e-6, affine=True)\n\n\nclass Upsample(nn.Module):\n    def __init__(self, in_channels, with_conv):\n        super().__init__()\n        self.with_conv = with_conv\n        if self.with_conv:\n            self.conv = torch.nn.Conv2d(in_channels,\n                                        in_channels,\n                                        kernel_size=3,\n                                        stride=1,\n                                        padding=1)\n\n    def forward(self, x):\n        x = torch.nn.functional.interpolate(x, scale_factor=2.0, mode=\"nearest\")\n        if self.with_conv:\n            x = self.conv(x)\n        return x\n\n\nclass Downsample(nn.Module):\n    def __init__(self, in_channels, with_conv):\n        super().__init__()\n        self.with_conv = with_conv\n        if self.with_conv:\n            # no asymmetric padding in torch conv, must do it ourselves\n            self.conv = torch.nn.Conv2d(in_channels,\n                                        in_channels,\n                                        kernel_size=3,\n                                        stride=2,\n                                        padding=0)\n\n    def forward(self, x):\n        if self.with_conv:\n            pad = (0,1,0,1)\n            x = torch.nn.functional.pad(x, pad, mode=\"constant\", value=0)\n            x = self.conv(x)\n        else:\n            x = torch.nn.functional.avg_pool2d(x, kernel_size=2, stride=2)\n        return x\n\n\nclass ResnetBlock(nn.Module):\n    def __init__(self, *, in_channels, out_channels=None, conv_shortcut=False,\n                 dropout, temb_channels=512):\n        super().__init__()\n        self.in_channels = in_channels\n        out_channels = in_channels if out_channels is None else out_channels\n        self.out_channels = out_channels\n        self.use_conv_shortcut = conv_shortcut\n\n        self.norm1 = Normalize(in_channels)\n        self.conv1 = torch.nn.Conv2d(in_channels,\n                                     out_channels,\n                                     
kernel_size=3,\n                                     stride=1,\n                                     padding=1)\n        if temb_channels > 0:\n            self.temb_proj = torch.nn.Linear(temb_channels,\n                                             out_channels)\n        self.norm2 = Normalize(out_channels)\n        self.dropout = torch.nn.Dropout(dropout)\n        self.conv2 = torch.nn.Conv2d(out_channels,\n                                     out_channels,\n                                     kernel_size=3,\n                                     stride=1,\n                                     padding=1)\n        if self.in_channels != self.out_channels:\n            if self.use_conv_shortcut:\n                self.conv_shortcut = torch.nn.Conv2d(in_channels,\n                                                     out_channels,\n                                                     kernel_size=3,\n                                                     stride=1,\n                                                     padding=1)\n            else:\n                self.nin_shortcut = torch.nn.Conv2d(in_channels,\n                                                    out_channels,\n                                                    kernel_size=1,\n                                                    stride=1,\n                                                    padding=0)\n\n    def forward(self, x, temb):\n        h = x\n        h = self.norm1(h)\n        h = nonlinearity(h)\n        h = self.conv1(h)\n\n        if temb is not None:\n            h = h + self.temb_proj(nonlinearity(temb))[:,:,None,None]\n\n        h = self.norm2(h)\n        h = nonlinearity(h)\n        h = self.dropout(h)\n        h = self.conv2(h)\n\n        if self.in_channels != self.out_channels:\n            if self.use_conv_shortcut:\n                x = self.conv_shortcut(x)\n            else:\n                x = self.nin_shortcut(x)\n\n        return x+h\n\n\nclass AttnBlock(nn.Module):\n    def __init__(self, in_channels):\n        super().__init__()\n        self.in_channels = in_channels\n\n        self.norm = Normalize(in_channels)\n        self.q = torch.nn.Conv2d(in_channels,\n                                 in_channels,\n                                 kernel_size=1,\n                                 stride=1,\n                                 padding=0)\n        self.k = torch.nn.Conv2d(in_channels,\n                                 in_channels,\n                                 kernel_size=1,\n                                 stride=1,\n                                 padding=0)\n        self.v = torch.nn.Conv2d(in_channels,\n                                 in_channels,\n                                 kernel_size=1,\n                                 stride=1,\n                                 padding=0)\n        self.proj_out = torch.nn.Conv2d(in_channels,\n                                        in_channels,\n                                        kernel_size=1,\n                                        stride=1,\n                                        padding=0)\n\n    def forward(self, x):\n        h_ = x\n        h_ = self.norm(h_)\n        q = self.q(h_)\n        k = self.k(h_)\n        v = self.v(h_)\n\n        # compute attention\n        b,c,h,w = q.shape\n        q = q.reshape(b,c,h*w)\n        q = q.permute(0,2,1)   # b,hw,c\n        k = k.reshape(b,c,h*w) # b,c,hw\n        w_ = torch.bmm(q,k)     # b,hw,hw    w[b,i,j]=sum_c q[b,i,c]k[b,c,j]\n        w_ = w_ * (int(c)**(-0.5))\n        w_ = 
torch.nn.functional.softmax(w_, dim=2)\n\n        # attend to values\n        v = v.reshape(b,c,h*w)\n        w_ = w_.permute(0,2,1)   # b,hw,hw (first hw of k, second of q)\n        h_ = torch.bmm(v,w_)     # b, c,hw (hw of q) h_[b,c,j] = sum_i v[b,c,i] w_[b,i,j]\n        h_ = h_.reshape(b,c,h,w)\n\n        h_ = self.proj_out(h_)\n\n        return x+h_\n\nclass MemoryEfficientAttnBlock(nn.Module):\n    \"\"\"\n        Uses xformers efficient implementation,\n        see https://github.com/MatthieuTPHR/diffusers/blob/d80b531ff8060ec1ea982b65a1b8df70f73aa67c/src/diffusers/models/attention.py#L223\n        Note: this is a single-head self-attention operation\n    \"\"\"\n    #\n    def __init__(self, in_channels):\n        super().__init__()\n        self.in_channels = in_channels\n\n        self.norm = Normalize(in_channels)\n        self.q = torch.nn.Conv2d(in_channels,\n                                 in_channels,\n                                 kernel_size=1,\n                                 stride=1,\n                                 padding=0)\n        self.k = torch.nn.Conv2d(in_channels,\n                                 in_channels,\n                                 kernel_size=1,\n                                 stride=1,\n                                 padding=0)\n        self.v = torch.nn.Conv2d(in_channels,\n                                 in_channels,\n                                 kernel_size=1,\n                                 stride=1,\n                                 padding=0)\n        self.proj_out = torch.nn.Conv2d(in_channels,\n                                        in_channels,\n                                        kernel_size=1,\n                                        stride=1,\n                                        padding=0)\n        self.attention_op: Optional[Any] = None\n\n    def forward(self, x):\n        h_ = x\n        h_ = self.norm(h_)\n        q = self.q(h_)\n        k = self.k(h_)\n        v = self.v(h_)\n\n        # compute attention\n        B, C, H, W = q.shape\n        q, k, v = map(lambda x: rearrange(x, 'b c h w -> b (h w) c'), (q, k, v))\n\n        q, k, v = map(\n            lambda t: t.unsqueeze(3)\n            .reshape(B, t.shape[1], 1, C)\n            .permute(0, 2, 1, 3)\n            .reshape(B * 1, t.shape[1], C)\n            .contiguous(),\n            (q, k, v),\n        )\n        out = xformers.ops.memory_efficient_attention(q, k, v, attn_bias=None, op=self.attention_op)\n\n        out = (\n            out.unsqueeze(0)\n            .reshape(B, 1, out.shape[1], C)\n            .permute(0, 2, 1, 3)\n            .reshape(B, out.shape[1], C)\n        )\n        out = rearrange(out, 'b (h w) c -> b c h w', b=B, h=H, w=W, c=C)\n        out = self.proj_out(out)\n        return x+out\n\n\nclass MemoryEfficientCrossAttentionWrapper(MemoryEfficientCrossAttention):\n    def forward(self, x, context=None, mask=None):\n        b, c, h, w = x.shape\n        x = rearrange(x, 'b c h w -> b (h w) c')\n        out = super().forward(x, context=context, mask=mask)\n        out = rearrange(out, 'b (h w) c -> b c h w', h=h, w=w, c=c)\n        return x + out\n\n\ndef make_attn(in_channels, attn_type=\"vanilla\", attn_kwargs=None):\n    assert attn_type in [\"vanilla\", \"vanilla-xformers\", \"memory-efficient-cross-attn\", \"linear\", \"none\"], f'attn_type {attn_type} unknown'\n    if XFORMERS_IS_AVAILBLE and attn_type == \"vanilla\":\n        attn_type = \"vanilla-xformers\"\n    print(f\"making attention of type '{attn_type}' with 
{in_channels} in_channels\")\n    if attn_type == \"vanilla\":\n        assert attn_kwargs is None\n        return AttnBlock(in_channels)\n    elif attn_type == \"vanilla-xformers\":\n        print(f\"building MemoryEfficientAttnBlock with {in_channels} in_channels...\")\n        return MemoryEfficientAttnBlock(in_channels)\n    elif attn_type == \"memory-efficient-cross-attn\":\n        attn_kwargs[\"query_dim\"] = in_channels\n        return MemoryEfficientCrossAttentionWrapper(**attn_kwargs)\n    elif attn_type == \"none\":\n        return nn.Identity(in_channels)\n    else:\n        raise NotImplementedError()\n\n\nclass Model(nn.Module):\n    def __init__(self, *, ch, out_ch, ch_mult=(1,2,4,8), num_res_blocks,\n                 attn_resolutions, dropout=0.0, resamp_with_conv=True, in_channels,\n                 resolution, use_timestep=True, use_linear_attn=False, attn_type=\"vanilla\"):\n        super().__init__()\n        if use_linear_attn: attn_type = \"linear\"\n        self.ch = ch\n        self.temb_ch = self.ch*4\n        self.num_resolutions = len(ch_mult)\n        self.num_res_blocks = num_res_blocks\n        self.resolution = resolution\n        self.in_channels = in_channels\n\n        self.use_timestep = use_timestep\n        if self.use_timestep:\n            # timestep embedding\n            self.temb = nn.Module()\n            self.temb.dense = nn.ModuleList([\n                torch.nn.Linear(self.ch,\n                                self.temb_ch),\n                torch.nn.Linear(self.temb_ch,\n                                self.temb_ch),\n            ])\n\n        # downsampling\n        self.conv_in = torch.nn.Conv2d(in_channels,\n                                       self.ch,\n                                       kernel_size=3,\n                                       stride=1,\n                                       padding=1)\n\n        curr_res = resolution\n        in_ch_mult = (1,)+tuple(ch_mult)\n        self.down = nn.ModuleList()\n        for i_level in range(self.num_resolutions):\n            block = nn.ModuleList()\n            attn = nn.ModuleList()\n            block_in = ch*in_ch_mult[i_level]\n            block_out = ch*ch_mult[i_level]\n            for i_block in range(self.num_res_blocks):\n                block.append(ResnetBlock(in_channels=block_in,\n                                         out_channels=block_out,\n                                         temb_channels=self.temb_ch,\n                                         dropout=dropout))\n                block_in = block_out\n                if curr_res in attn_resolutions:\n                    attn.append(make_attn(block_in, attn_type=attn_type))\n            down = nn.Module()\n            down.block = block\n            down.attn = attn\n            if i_level != self.num_resolutions-1:\n                down.downsample = Downsample(block_in, resamp_with_conv)\n                curr_res = curr_res // 2\n            self.down.append(down)\n\n        # middle\n        self.mid = nn.Module()\n        self.mid.block_1 = ResnetBlock(in_channels=block_in,\n                                       out_channels=block_in,\n                                       temb_channels=self.temb_ch,\n                                       dropout=dropout)\n        self.mid.attn_1 = make_attn(block_in, attn_type=attn_type)\n        self.mid.block_2 = ResnetBlock(in_channels=block_in,\n                                       out_channels=block_in,\n                                       temb_channels=self.temb_ch,\n
                                      dropout=dropout)\n\n        # upsampling\n        self.up = nn.ModuleList()\n        for i_level in reversed(range(self.num_resolutions)):\n            block = nn.ModuleList()\n            attn = nn.ModuleList()\n            block_out = ch*ch_mult[i_level]\n            skip_in = ch*ch_mult[i_level]\n            for i_block in range(self.num_res_blocks+1):\n                if i_block == self.num_res_blocks:\n                    skip_in = ch*in_ch_mult[i_level]\n                block.append(ResnetBlock(in_channels=block_in+skip_in,\n                                         out_channels=block_out,\n                                         temb_channels=self.temb_ch,\n                                         dropout=dropout))\n                block_in = block_out\n                if curr_res in attn_resolutions:\n                    attn.append(make_attn(block_in, attn_type=attn_type))\n            up = nn.Module()\n            up.block = block\n            up.attn = attn\n            if i_level != 0:\n                up.upsample = Upsample(block_in, resamp_with_conv)\n                curr_res = curr_res * 2\n            self.up.insert(0, up) # prepend to get consistent order\n\n        # end\n        self.norm_out = Normalize(block_in)\n        self.conv_out = torch.nn.Conv2d(block_in,\n                                        out_ch,\n                                        kernel_size=3,\n                                        stride=1,\n                                        padding=1)\n\n    def forward(self, x, t=None, context=None):\n        #assert x.shape[2] == x.shape[3] == self.resolution\n        if context is not None:\n            # assume aligned context, cat along channel axis\n            x = torch.cat((x, context), dim=1)\n        if self.use_timestep:\n            # timestep embedding\n            assert t is not None\n            temb = get_timestep_embedding(t, self.ch)\n            temb = self.temb.dense[0](temb)\n            temb = nonlinearity(temb)\n            temb = self.temb.dense[1](temb)\n        else:\n            temb = None\n\n        # downsampling\n        hs = [self.conv_in(x)]\n        for i_level in range(self.num_resolutions):\n            for i_block in range(self.num_res_blocks):\n                h = self.down[i_level].block[i_block](hs[-1], temb)\n                if len(self.down[i_level].attn) > 0:\n                    h = self.down[i_level].attn[i_block](h)\n                hs.append(h)\n            if i_level != self.num_resolutions-1:\n                hs.append(self.down[i_level].downsample(hs[-1]))\n\n        # middle\n        h = hs[-1]\n        h = self.mid.block_1(h, temb)\n        h = self.mid.attn_1(h)\n        h = self.mid.block_2(h, temb)\n\n        # upsampling\n        for i_level in reversed(range(self.num_resolutions)):\n            for i_block in range(self.num_res_blocks+1):\n                h = self.up[i_level].block[i_block](\n                    torch.cat([h, hs.pop()], dim=1), temb)\n                if len(self.up[i_level].attn) > 0:\n                    h = self.up[i_level].attn[i_block](h)\n            if i_level != 0:\n                h = self.up[i_level].upsample(h)\n\n        # end\n        h = self.norm_out(h)\n        h = nonlinearity(h)\n        h = self.conv_out(h)\n        return h\n\n    def get_last_layer(self):\n        return self.conv_out.weight\n\n\nclass Encoder(nn.Module):\n    def __init__(self, *, ch, out_ch, ch_mult=(1,2,4,8), num_res_blocks,\n                 
attn_resolutions, dropout=0.0, resamp_with_conv=True, in_channels,\n                 resolution, z_channels, double_z=True, use_linear_attn=False, attn_type=\"vanilla\",\n                 **ignore_kwargs):\n        super().__init__()\n        if use_linear_attn: attn_type = \"linear\"\n        self.ch = ch\n        self.temb_ch = 0\n        self.num_resolutions = len(ch_mult)\n        self.num_res_blocks = num_res_blocks\n        self.resolution = resolution\n        self.in_channels = in_channels\n\n        # downsampling\n        self.conv_in = torch.nn.Conv2d(in_channels,\n                                       self.ch,\n                                       kernel_size=3,\n                                       stride=1,\n                                       padding=1)\n\n        curr_res = resolution\n        in_ch_mult = (1,)+tuple(ch_mult)\n        self.in_ch_mult = in_ch_mult\n        self.down = nn.ModuleList()\n        for i_level in range(self.num_resolutions):\n            block = nn.ModuleList()\n            attn = nn.ModuleList()\n            block_in = ch*in_ch_mult[i_level]\n            block_out = ch*ch_mult[i_level]\n            for i_block in range(self.num_res_blocks):\n                block.append(ResnetBlock(in_channels=block_in,\n                                         out_channels=block_out,\n                                         temb_channels=self.temb_ch,\n                                         dropout=dropout))\n                block_in = block_out\n                if curr_res in attn_resolutions:\n                    attn.append(make_attn(block_in, attn_type=attn_type))\n            down = nn.Module()\n            down.block = block\n            down.attn = attn\n            if i_level != self.num_resolutions-1:\n                down.downsample = Downsample(block_in, resamp_with_conv)\n                curr_res = curr_res // 2\n            self.down.append(down)\n\n        # middle\n        self.mid = nn.Module()\n        self.mid.block_1 = ResnetBlock(in_channels=block_in,\n                                       out_channels=block_in,\n                                       temb_channels=self.temb_ch,\n                                       dropout=dropout)\n        self.mid.attn_1 = make_attn(block_in, attn_type=attn_type)\n        self.mid.block_2 = ResnetBlock(in_channels=block_in,\n                                       out_channels=block_in,\n                                       temb_channels=self.temb_ch,\n                                       dropout=dropout)\n\n        # end\n        self.norm_out = Normalize(block_in)\n        self.conv_out = torch.nn.Conv2d(block_in,\n                                        2*z_channels if double_z else z_channels,\n                                        kernel_size=3,\n                                        stride=1,\n                                        padding=1)\n\n    def forward(self, x):\n        # timestep embedding\n        temb = None\n\n        # downsampling\n        hs = [self.conv_in(x)]\n        for i_level in range(self.num_resolutions):\n            for i_block in range(self.num_res_blocks):\n                h = self.down[i_level].block[i_block](hs[-1], temb)\n                if len(self.down[i_level].attn) > 0:\n                    h = self.down[i_level].attn[i_block](h)\n                hs.append(h)\n            if i_level != self.num_resolutions-1:\n                hs.append(self.down[i_level].downsample(hs[-1]))\n\n        # middle\n        h = hs[-1]\n        h = 
self.mid.block_1(h, temb)\n        h = self.mid.attn_1(h)\n        h = self.mid.block_2(h, temb)\n\n        # end\n        h = self.norm_out(h)\n        h = nonlinearity(h)\n        h = self.conv_out(h)\n        return h\n\n\nclass Decoder(nn.Module):\n    def __init__(self, *, ch, out_ch, ch_mult=(1,2,4,8), num_res_blocks,\n                 attn_resolutions, dropout=0.0, resamp_with_conv=True, in_channels,\n                 resolution, z_channels, give_pre_end=False, tanh_out=False, use_linear_attn=False,\n                 attn_type=\"vanilla\", **ignorekwargs):\n        super().__init__()\n        if use_linear_attn: attn_type = \"linear\"\n        self.ch = ch\n        self.temb_ch = 0\n        self.num_resolutions = len(ch_mult)\n        self.num_res_blocks = num_res_blocks\n        self.resolution = resolution\n        self.in_channels = in_channels\n        self.give_pre_end = give_pre_end\n        self.tanh_out = tanh_out\n\n        # compute in_ch_mult, block_in and curr_res at lowest res\n        in_ch_mult = (1,)+tuple(ch_mult)\n        block_in = ch*ch_mult[self.num_resolutions-1]\n        curr_res = resolution // 2**(self.num_resolutions-1)\n        self.z_shape = (1,z_channels,curr_res,curr_res)\n        print(\"Working with z of shape {} = {} dimensions.\".format(\n            self.z_shape, np.prod(self.z_shape)))\n\n        # z to block_in\n        self.conv_in = torch.nn.Conv2d(z_channels,\n                                       block_in,\n                                       kernel_size=3,\n                                       stride=1,\n                                       padding=1)\n\n        # middle\n        self.mid = nn.Module()\n        self.mid.block_1 = ResnetBlock(in_channels=block_in,\n                                       out_channels=block_in,\n                                       temb_channels=self.temb_ch,\n                                       dropout=dropout)\n        self.mid.attn_1 = make_attn(block_in, attn_type=attn_type)\n        self.mid.block_2 = ResnetBlock(in_channels=block_in,\n                                       out_channels=block_in,\n                                       temb_channels=self.temb_ch,\n                                       dropout=dropout)\n\n        # upsampling\n        self.up = nn.ModuleList()\n        for i_level in reversed(range(self.num_resolutions)):\n            block = nn.ModuleList()\n            attn = nn.ModuleList()\n            block_out = ch*ch_mult[i_level]\n            for i_block in range(self.num_res_blocks+1):\n                block.append(ResnetBlock(in_channels=block_in,\n                                         out_channels=block_out,\n                                         temb_channels=self.temb_ch,\n                                         dropout=dropout))\n                block_in = block_out\n                if curr_res in attn_resolutions:\n                    attn.append(make_attn(block_in, attn_type=attn_type))\n            up = nn.Module()\n            up.block = block\n            up.attn = attn\n            if i_level != 0:\n                up.upsample = Upsample(block_in, resamp_with_conv)\n                curr_res = curr_res * 2\n            self.up.insert(0, up) # prepend to get consistent order\n\n        # end\n        self.norm_out = Normalize(block_in)\n        self.conv_out = torch.nn.Conv2d(block_in,\n                                        out_ch,\n                                        kernel_size=3,\n                                        stride=1,\n             
                           padding=1)\n\n    def forward(self, z):\n        #assert z.shape[1:] == self.z_shape[1:]\n        self.last_z_shape = z.shape\n\n        # timestep embedding\n        temb = None\n\n        # z to block_in\n        h = self.conv_in(z)\n\n        # middle\n        h = self.mid.block_1(h, temb)\n        h = self.mid.attn_1(h)\n        h = self.mid.block_2(h, temb)\n\n        # upsampling\n        for i_level in reversed(range(self.num_resolutions)):\n            for i_block in range(self.num_res_blocks+1):\n                h = self.up[i_level].block[i_block](h, temb)\n                if len(self.up[i_level].attn) > 0:\n                    h = self.up[i_level].attn[i_block](h)\n            if i_level != 0:\n                h = self.up[i_level].upsample(h)\n\n        # end\n        if self.give_pre_end:\n            return h\n\n        h = self.norm_out(h)\n        h = nonlinearity(h)\n        h = self.conv_out(h)\n        if self.tanh_out:\n            h = torch.tanh(h)\n        return h\n\n\nclass SimpleDecoder(nn.Module):\n    def __init__(self, in_channels, out_channels, *args, **kwargs):\n        super().__init__()\n        self.model = nn.ModuleList([nn.Conv2d(in_channels, in_channels, 1),\n                                     ResnetBlock(in_channels=in_channels,\n                                                 out_channels=2 * in_channels,\n                                                 temb_channels=0, dropout=0.0),\n                                     ResnetBlock(in_channels=2 * in_channels,\n                                                out_channels=4 * in_channels,\n                                                temb_channels=0, dropout=0.0),\n                                     ResnetBlock(in_channels=4 * in_channels,\n                                                out_channels=2 * in_channels,\n                                                temb_channels=0, dropout=0.0),\n                                     nn.Conv2d(2*in_channels, in_channels, 1),\n                                     Upsample(in_channels, with_conv=True)])\n        # end\n        self.norm_out = Normalize(in_channels)\n        self.conv_out = torch.nn.Conv2d(in_channels,\n                                        out_channels,\n                                        kernel_size=3,\n                                        stride=1,\n                                        padding=1)\n\n    def forward(self, x):\n        for i, layer in enumerate(self.model):\n            if i in [1,2,3]:\n                x = layer(x, None)\n            else:\n                x = layer(x)\n\n        h = self.norm_out(x)\n        h = nonlinearity(h)\n        x = self.conv_out(h)\n        return x\n\n\nclass UpsampleDecoder(nn.Module):\n    def __init__(self, in_channels, out_channels, ch, num_res_blocks, resolution,\n                 ch_mult=(2,2), dropout=0.0):\n        super().__init__()\n        # upsampling\n        self.temb_ch = 0\n        self.num_resolutions = len(ch_mult)\n        self.num_res_blocks = num_res_blocks\n        block_in = in_channels\n        curr_res = resolution // 2 ** (self.num_resolutions - 1)\n        self.res_blocks = nn.ModuleList()\n        self.upsample_blocks = nn.ModuleList()\n        for i_level in range(self.num_resolutions):\n            res_block = []\n            block_out = ch * ch_mult[i_level]\n            for i_block in range(self.num_res_blocks + 1):\n                res_block.append(ResnetBlock(in_channels=block_in,\n                                 
        out_channels=block_out,\n                                         temb_channels=self.temb_ch,\n                                         dropout=dropout))\n                block_in = block_out\n            self.res_blocks.append(nn.ModuleList(res_block))\n            if i_level != self.num_resolutions - 1:\n                self.upsample_blocks.append(Upsample(block_in, True))\n                curr_res = curr_res * 2\n\n        # end\n        self.norm_out = Normalize(block_in)\n        self.conv_out = torch.nn.Conv2d(block_in,\n                                        out_channels,\n                                        kernel_size=3,\n                                        stride=1,\n                                        padding=1)\n\n    def forward(self, x):\n        # upsampling\n        h = x\n        for k, i_level in enumerate(range(self.num_resolutions)):\n            for i_block in range(self.num_res_blocks + 1):\n                h = self.res_blocks[i_level][i_block](h, None)\n            if i_level != self.num_resolutions - 1:\n                h = self.upsample_blocks[k](h)\n        h = self.norm_out(h)\n        h = nonlinearity(h)\n        h = self.conv_out(h)\n        return h\n\n\nclass LatentRescaler(nn.Module):\n    def __init__(self, factor, in_channels, mid_channels, out_channels, depth=2):\n        super().__init__()\n        # residual block, interpolate, residual block\n        self.factor = factor\n        self.conv_in = nn.Conv2d(in_channels,\n                                 mid_channels,\n                                 kernel_size=3,\n                                 stride=1,\n                                 padding=1)\n        self.res_block1 = nn.ModuleList([ResnetBlock(in_channels=mid_channels,\n                                                     out_channels=mid_channels,\n                                                     temb_channels=0,\n                                                     dropout=0.0) for _ in range(depth)])\n        self.attn = AttnBlock(mid_channels)\n        self.res_block2 = nn.ModuleList([ResnetBlock(in_channels=mid_channels,\n                                                     out_channels=mid_channels,\n                                                     temb_channels=0,\n                                                     dropout=0.0) for _ in range(depth)])\n\n        self.conv_out = nn.Conv2d(mid_channels,\n                                  out_channels,\n                                  kernel_size=1,\n                                  )\n\n    def forward(self, x):\n        x = self.conv_in(x)\n        for block in self.res_block1:\n            x = block(x, None)\n        x = torch.nn.functional.interpolate(x, size=(int(round(x.shape[2]*self.factor)), int(round(x.shape[3]*self.factor))))\n        x = self.attn(x)\n        for block in self.res_block2:\n            x = block(x, None)\n        x = self.conv_out(x)\n        return x\n\n\nclass MergedRescaleEncoder(nn.Module):\n    def __init__(self, in_channels, ch, resolution, out_ch, num_res_blocks,\n                 attn_resolutions, dropout=0.0, resamp_with_conv=True,\n                 ch_mult=(1,2,4,8), rescale_factor=1.0, rescale_module_depth=1):\n        super().__init__()\n        intermediate_chn = ch * ch_mult[-1]\n        self.encoder = Encoder(in_channels=in_channels, num_res_blocks=num_res_blocks, ch=ch, ch_mult=ch_mult,\n                               z_channels=intermediate_chn, double_z=False, resolution=resolution,\n                             
attn_resolutions=attn_resolutions, dropout=dropout, resamp_with_conv=resamp_with_conv,\n                               out_ch=None)\n        self.rescaler = LatentRescaler(factor=rescale_factor, in_channels=intermediate_chn,\n                                       mid_channels=intermediate_chn, out_channels=out_ch, depth=rescale_module_depth)\n\n    def forward(self, x):\n        x = self.encoder(x)\n        x = self.rescaler(x)\n        return x\n\n\nclass MergedRescaleDecoder(nn.Module):\n    def __init__(self, z_channels, out_ch, resolution, num_res_blocks, attn_resolutions, ch, ch_mult=(1,2,4,8),\n                 dropout=0.0, resamp_with_conv=True, rescale_factor=1.0, rescale_module_depth=1):\n        super().__init__()\n        tmp_chn = z_channels*ch_mult[-1]\n        self.decoder = Decoder(out_ch=out_ch, z_channels=tmp_chn, attn_resolutions=attn_resolutions, dropout=dropout,\n                               resamp_with_conv=resamp_with_conv, in_channels=None, num_res_blocks=num_res_blocks,\n                               ch_mult=ch_mult, resolution=resolution, ch=ch)\n        self.rescaler = LatentRescaler(factor=rescale_factor, in_channels=z_channels, mid_channels=tmp_chn,\n                                       out_channels=tmp_chn, depth=rescale_module_depth)\n\n    def forward(self, x):\n        x = self.rescaler(x)\n        x = self.decoder(x)\n        return x\n\n\nclass Upsampler(nn.Module):\n    def __init__(self, in_size, out_size, in_channels, out_channels, ch_mult=2):\n        super().__init__()\n        assert out_size >= in_size\n        num_blocks = int(np.log2(out_size//in_size))+1\n        factor_up = 1.+ (out_size % in_size)\n        print(f\"Building {self.__class__.__name__} with in_size: {in_size} --> out_size {out_size} and factor {factor_up}\")\n        self.rescaler = LatentRescaler(factor=factor_up, in_channels=in_channels, mid_channels=2*in_channels,\n                                       out_channels=in_channels)\n        self.decoder = Decoder(out_ch=out_channels, resolution=out_size, z_channels=in_channels, num_res_blocks=2,\n                               attn_resolutions=[], in_channels=None, ch=in_channels,\n                               ch_mult=[ch_mult for _ in range(num_blocks)])\n\n    def forward(self, x):\n        x = self.rescaler(x)\n        x = self.decoder(x)\n        return x\n\n\nclass Resize(nn.Module):\n    def __init__(self, in_channels=None, learned=False, mode=\"bilinear\"):\n        super().__init__()\n        self.with_conv = learned\n        self.mode = mode\n        if self.with_conv:\n            print(f\"Note: {self.__class__.__name__} uses learned downsampling and will ignore the fixed {mode} mode\")\n            raise NotImplementedError()\n            assert in_channels is not None\n            # no asymmetric padding in torch conv, must do it ourselves\n            self.conv = torch.nn.Conv2d(in_channels,\n                                        in_channels,\n                                        kernel_size=4,\n                                        stride=2,\n                                        padding=1)\n\n    def forward(self, x, scale_factor=1.0):\n        if scale_factor==1.0:\n            return x\n        else:\n            x = torch.nn.functional.interpolate(x, mode=self.mode, align_corners=False, scale_factor=scale_factor)\n        return x\n"
  },
  {
    "path": "ToonCrafter/ldm/modules/diffusionmodules/openaimodel.py",
    "content": "from abc import abstractmethod\nimport math\n\nimport numpy as np\nimport torch as th\nimport torch.nn as nn\nimport torch.nn.functional as F\n\nfrom ToonCrafter.ldm.modules.diffusionmodules.util import (\n    checkpoint,\n    conv_nd,\n    linear,\n    avg_pool_nd,\n    zero_module,\n    normalization,\n    timestep_embedding,\n)\nfrom ToonCrafter.ldm.modules.attention import SpatialTransformer\nfrom ToonCrafter.ldm.util import exists\n\n\n# dummy replace\ndef convert_module_to_f16(x):\n    pass\n\ndef convert_module_to_f32(x):\n    pass\n\n\n## go\nclass AttentionPool2d(nn.Module):\n    \"\"\"\n    Adapted from CLIP: https://github.com/openai/CLIP/blob/main/clip/model.py\n    \"\"\"\n\n    def __init__(\n        self,\n        spacial_dim: int,\n        embed_dim: int,\n        num_heads_channels: int,\n        output_dim: int = None,\n    ):\n        super().__init__()\n        self.positional_embedding = nn.Parameter(th.randn(embed_dim, spacial_dim ** 2 + 1) / embed_dim ** 0.5)\n        self.qkv_proj = conv_nd(1, embed_dim, 3 * embed_dim, 1)\n        self.c_proj = conv_nd(1, embed_dim, output_dim or embed_dim, 1)\n        self.num_heads = embed_dim // num_heads_channels\n        self.attention = QKVAttention(self.num_heads)\n\n    def forward(self, x):\n        b, c, *_spatial = x.shape\n        x = x.reshape(b, c, -1)  # NC(HW)\n        x = th.cat([x.mean(dim=-1, keepdim=True), x], dim=-1)  # NC(HW+1)\n        x = x + self.positional_embedding[None, :, :].to(x.dtype)  # NC(HW+1)\n        x = self.qkv_proj(x)\n        x = self.attention(x)\n        x = self.c_proj(x)\n        return x[:, :, 0]\n\n\nclass TimestepBlock(nn.Module):\n    \"\"\"\n    Any module where forward() takes timestep embeddings as a second argument.\n    \"\"\"\n\n    @abstractmethod\n    def forward(self, x, emb):\n        \"\"\"\n        Apply the module to `x` given `emb` timestep embeddings.\n        \"\"\"\n\n\nclass TimestepEmbedSequential(nn.Sequential, TimestepBlock):\n    \"\"\"\n    A sequential module that passes timestep embeddings to the children that\n    support it as an extra input.\n    \"\"\"\n\n    def forward(self, x, emb, context=None, check=False):\n        for layer in self:\n            if isinstance(layer, TimestepBlock):\n                x = layer(x, emb)\n            elif isinstance(layer, SpatialTransformer):\n                x = layer(x, context)\n            else:\n                x = layer(x)\n        return x\n\n\nclass Upsample(nn.Module):\n    \"\"\"\n    An upsampling layer with an optional convolution.\n    :param channels: channels in the inputs and outputs.\n    :param use_conv: a bool determining if a convolution is applied.\n    :param dims: determines if the signal is 1D, 2D, or 3D. 
If 3D, then\n                 upsampling occurs in the inner-two dimensions.\n    \"\"\"\n\n    def __init__(self, channels, use_conv, dims=2, out_channels=None, padding=1):\n        super().__init__()\n        self.channels = channels\n        self.out_channels = out_channels or channels\n        self.use_conv = use_conv\n        self.dims = dims\n        if use_conv:\n            self.conv = conv_nd(dims, self.channels, self.out_channels, 3, padding=padding)\n\n    def forward(self, x):\n        assert x.shape[1] == self.channels\n        if self.dims == 3:\n            x = F.interpolate(\n                x, (x.shape[2], x.shape[3] * 2, x.shape[4] * 2), mode=\"nearest\"\n            )\n        else:\n            x = F.interpolate(x, scale_factor=2, mode=\"nearest\")\n        if self.use_conv:\n            x = self.conv(x)\n        return x\n\nclass TransposedUpsample(nn.Module):\n    'Learned 2x upsampling without padding'\n    def __init__(self, channels, out_channels=None, ks=5):\n        super().__init__()\n        self.channels = channels\n        self.out_channels = out_channels or channels\n\n        self.up = nn.ConvTranspose2d(self.channels,self.out_channels,kernel_size=ks,stride=2)\n\n    def forward(self,x):\n        return self.up(x)\n\n\nclass Downsample(nn.Module):\n    \"\"\"\n    A downsampling layer with an optional convolution.\n    :param channels: channels in the inputs and outputs.\n    :param use_conv: a bool determining if a convolution is applied.\n    :param dims: determines if the signal is 1D, 2D, or 3D. If 3D, then\n                 downsampling occurs in the inner-two dimensions.\n    \"\"\"\n\n    def __init__(self, channels, use_conv, dims=2, out_channels=None,padding=1):\n        super().__init__()\n        self.channels = channels\n        self.out_channels = out_channels or channels\n        self.use_conv = use_conv\n        self.dims = dims\n        stride = 2 if dims != 3 else (1, 2, 2)\n        if use_conv:\n            self.op = conv_nd(\n                dims, self.channels, self.out_channels, 3, stride=stride, padding=padding\n            )\n        else:\n            assert self.channels == self.out_channels\n            self.op = avg_pool_nd(dims, kernel_size=stride, stride=stride)\n\n    def forward(self, x):\n        assert x.shape[1] == self.channels\n        return self.op(x)\n\n\nclass ResBlock(TimestepBlock):\n    \"\"\"\n    A residual block that can optionally change the number of channels.\n    :param channels: the number of input channels.\n    :param emb_channels: the number of timestep embedding channels.\n    :param dropout: the rate of dropout.\n    :param out_channels: if specified, the number of out channels.\n    :param use_conv: if True and out_channels is specified, use a spatial\n        convolution instead of a smaller 1x1 convolution to change the\n        channels in the skip connection.\n    :param dims: determines if the signal is 1D, 2D, or 3D.\n    :param use_checkpoint: if True, use gradient checkpointing on this module.\n    :param up: if True, use this block for upsampling.\n    :param down: if True, use this block for downsampling.\n    \"\"\"\n\n    def __init__(\n        self,\n        channels,\n        emb_channels,\n        dropout,\n        out_channels=None,\n        use_conv=False,\n        use_scale_shift_norm=False,\n        dims=2,\n        use_checkpoint=False,\n        up=False,\n        down=False,\n    ):\n        super().__init__()\n        self.channels = channels\n        self.emb_channels = 
emb_channels\n        self.dropout = dropout\n        self.out_channels = out_channels or channels\n        self.use_conv = use_conv\n        self.use_checkpoint = use_checkpoint\n        self.use_scale_shift_norm = use_scale_shift_norm\n\n        self.in_layers = nn.Sequential(\n            normalization(channels),\n            nn.SiLU(),\n            conv_nd(dims, channels, self.out_channels, 3, padding=1),\n        )\n\n        self.updown = up or down\n\n        if up:\n            self.h_upd = Upsample(channels, False, dims)\n            self.x_upd = Upsample(channels, False, dims)\n        elif down:\n            self.h_upd = Downsample(channels, False, dims)\n            self.x_upd = Downsample(channels, False, dims)\n        else:\n            self.h_upd = self.x_upd = nn.Identity()\n\n        self.emb_layers = nn.Sequential(\n            nn.SiLU(),\n            linear(\n                emb_channels,\n                2 * self.out_channels if use_scale_shift_norm else self.out_channels,\n            ),\n        )\n        self.out_layers = nn.Sequential(\n            normalization(self.out_channels),\n            nn.SiLU(),\n            nn.Dropout(p=dropout),\n            zero_module(\n                conv_nd(dims, self.out_channels, self.out_channels, 3, padding=1)\n            ),\n        )\n\n        if self.out_channels == channels:\n            self.skip_connection = nn.Identity()\n        elif use_conv:\n            self.skip_connection = conv_nd(\n                dims, channels, self.out_channels, 3, padding=1\n            )\n        else:\n            self.skip_connection = conv_nd(dims, channels, self.out_channels, 1)\n\n    def forward(self, x, emb):\n        \"\"\"\n        Apply the block to a Tensor, conditioned on a timestep embedding.\n        :param x: an [N x C x ...] Tensor of features.\n        :param emb: an [N x emb_channels] Tensor of timestep embeddings.\n        :return: an [N x C x ...] 
Tensor of outputs.\n        \"\"\"\n        return checkpoint(\n            self._forward, (x, emb), self.parameters(), self.use_checkpoint\n        )\n\n\n    def _forward(self, x, emb):\n        if self.updown:\n            in_rest, in_conv = self.in_layers[:-1], self.in_layers[-1]\n            h = in_rest(x)\n            h = self.h_upd(h)\n            x = self.x_upd(x)\n            h = in_conv(h)\n        else:\n            h = self.in_layers(x)\n        emb_out = self.emb_layers(emb).type(h.dtype)\n        while len(emb_out.shape) < len(h.shape):\n            emb_out = emb_out[..., None]\n        if self.use_scale_shift_norm:\n            out_norm, out_rest = self.out_layers[0], self.out_layers[1:]\n            scale, shift = th.chunk(emb_out, 2, dim=1)\n            h = out_norm(h) * (1 + scale) + shift\n            h = out_rest(h)\n        else:\n            h = h + emb_out\n            h = self.out_layers(h)\n        return self.skip_connection(x) + h\n\n\nclass AttentionBlock(nn.Module):\n    \"\"\"\n    An attention block that allows spatial positions to attend to each other.\n    Originally ported from here, but adapted to the N-d case.\n    https://github.com/hojonathanho/diffusion/blob/1e0dceb3b3495bbe19116a5e1b3596cd0706c543/diffusion_tf/models/unet.py#L66.\n    \"\"\"\n\n    def __init__(\n        self,\n        channels,\n        num_heads=1,\n        num_head_channels=-1,\n        use_checkpoint=False,\n        use_new_attention_order=False,\n    ):\n        super().__init__()\n        self.channels = channels\n        if num_head_channels == -1:\n            self.num_heads = num_heads\n        else:\n            assert (\n                channels % num_head_channels == 0\n            ), f\"q,k,v channels {channels} is not divisible by num_head_channels {num_head_channels}\"\n            self.num_heads = channels // num_head_channels\n        self.use_checkpoint = use_checkpoint\n        self.norm = normalization(channels)\n        self.qkv = conv_nd(1, channels, channels * 3, 1)\n        if use_new_attention_order:\n            # split qkv before split heads\n            self.attention = QKVAttention(self.num_heads)\n        else:\n            # split heads before split qkv\n            self.attention = QKVAttentionLegacy(self.num_heads)\n\n        self.proj_out = zero_module(conv_nd(1, channels, channels, 1))\n\n    def forward(self, x):\n        return checkpoint(self._forward, (x,), self.parameters(), True)   # TODO: check checkpoint usage, is True # TODO: fix the .half call!!!\n        #return pt_checkpoint(self._forward, x)  # pytorch\n\n    def _forward(self, x):\n        b, c, *spatial = x.shape\n        x = x.reshape(b, c, -1)\n        qkv = self.qkv(self.norm(x))\n        h = self.attention(qkv)\n        h = self.proj_out(h)\n        return (x + h).reshape(b, c, *spatial)\n\n\ndef count_flops_attn(model, _x, y):\n    \"\"\"\n    A counter for the `thop` package to count the operations in an\n    attention operation.\n    Meant to be used like:\n        macs, params = thop.profile(\n            model,\n            inputs=(inputs, timestamps),\n            custom_ops={QKVAttention: QKVAttention.count_flops},\n        )\n    \"\"\"\n    b, c, *spatial = y[0].shape\n    num_spatial = int(np.prod(spatial))\n    # We perform two matmuls with the same number of ops.\n    # The first computes the weight matrix, the second computes\n    # the combination of the value vectors.\n    matmul_ops = 2 * b * (num_spatial ** 2) * c\n    model.total_ops += 
th.DoubleTensor([matmul_ops])\n\n\nclass QKVAttentionLegacy(nn.Module):\n    \"\"\"\n    A module which performs QKV attention. Matches legacy QKVAttention + input/ouput heads shaping\n    \"\"\"\n\n    def __init__(self, n_heads):\n        super().__init__()\n        self.n_heads = n_heads\n\n    def forward(self, qkv):\n        \"\"\"\n        Apply QKV attention.\n        :param qkv: an [N x (H * 3 * C) x T] tensor of Qs, Ks, and Vs.\n        :return: an [N x (H * C) x T] tensor after attention.\n        \"\"\"\n        bs, width, length = qkv.shape\n        assert width % (3 * self.n_heads) == 0\n        ch = width // (3 * self.n_heads)\n        q, k, v = qkv.reshape(bs * self.n_heads, ch * 3, length).split(ch, dim=1)\n        scale = 1 / math.sqrt(math.sqrt(ch))\n        weight = th.einsum(\n            \"bct,bcs->bts\", q * scale, k * scale\n        )  # More stable with f16 than dividing afterwards\n        weight = th.softmax(weight.float(), dim=-1).type(weight.dtype)\n        a = th.einsum(\"bts,bcs->bct\", weight, v)\n        return a.reshape(bs, -1, length)\n\n    @staticmethod\n    def count_flops(model, _x, y):\n        return count_flops_attn(model, _x, y)\n\n\nclass QKVAttention(nn.Module):\n    \"\"\"\n    A module which performs QKV attention and splits in a different order.\n    \"\"\"\n\n    def __init__(self, n_heads):\n        super().__init__()\n        self.n_heads = n_heads\n\n    def forward(self, qkv):\n        \"\"\"\n        Apply QKV attention.\n        :param qkv: an [N x (3 * H * C) x T] tensor of Qs, Ks, and Vs.\n        :return: an [N x (H * C) x T] tensor after attention.\n        \"\"\"\n        bs, width, length = qkv.shape\n        assert width % (3 * self.n_heads) == 0\n        ch = width // (3 * self.n_heads)\n        q, k, v = qkv.chunk(3, dim=1)\n        scale = 1 / math.sqrt(math.sqrt(ch))\n        weight = th.einsum(\n            \"bct,bcs->bts\",\n            (q * scale).view(bs * self.n_heads, ch, length),\n            (k * scale).view(bs * self.n_heads, ch, length),\n        )  # More stable with f16 than dividing afterwards\n        weight = th.softmax(weight.float(), dim=-1).type(weight.dtype)\n        a = th.einsum(\"bts,bcs->bct\", weight, v.reshape(bs * self.n_heads, ch, length))\n        return a.reshape(bs, -1, length)\n\n    @staticmethod\n    def count_flops(model, _x, y):\n        return count_flops_attn(model, _x, y)\n\n\nclass UNetModel(nn.Module):\n    \"\"\"\n    The full UNet model with attention and timestep embedding.\n    :param in_channels: channels in the input Tensor.\n    :param model_channels: base channel count for the model.\n    :param out_channels: channels in the output Tensor.\n    :param num_res_blocks: number of residual blocks per downsample.\n    :param attention_resolutions: a collection of downsample rates at which\n        attention will take place. 
May be a set, list, or tuple.\n        For example, if this contains 4, then at 4x downsampling, attention\n        will be used.\n    :param dropout: the dropout probability.\n    :param channel_mult: channel multiplier for each level of the UNet.\n    :param conv_resample: if True, use learned convolutions for upsampling and\n        downsampling.\n    :param dims: determines if the signal is 1D, 2D, or 3D.\n    :param num_classes: if specified (as an int), then this model will be\n        class-conditional with `num_classes` classes.\n    :param use_checkpoint: use gradient checkpointing to reduce memory usage.\n    :param num_heads: the number of attention heads in each attention layer.\n    :param num_heads_channels: if specified, ignore num_heads and instead use\n                               a fixed channel width per attention head.\n    :param num_heads_upsample: works with num_heads to set a different number\n                               of heads for upsampling. Deprecated.\n    :param use_scale_shift_norm: use a FiLM-like conditioning mechanism.\n    :param resblock_updown: use residual blocks for up/downsampling.\n    :param use_new_attention_order: use a different attention pattern for potentially\n                                    increased efficiency.\n    \"\"\"\n\n    def __init__(\n        self,\n        image_size,\n        in_channels,\n        model_channels,\n        out_channels,\n        num_res_blocks,\n        attention_resolutions,\n        dropout=0,\n        channel_mult=(1, 2, 4, 8),\n        conv_resample=True,\n        dims=2,\n        num_classes=None,\n        use_checkpoint=False,\n        use_fp16=False,\n        num_heads=-1,\n        num_head_channels=-1,\n        num_heads_upsample=-1,\n        use_scale_shift_norm=False,\n        resblock_updown=False,\n        use_new_attention_order=False,\n        use_spatial_transformer=False,    # custom transformer support\n        transformer_depth=1,              # custom transformer support\n        context_dim=None,                 # custom transformer support\n        n_embed=None,                     # custom support for prediction of discrete ids into codebook of first stage vq model\n        legacy=True,\n        disable_self_attentions=None,\n        num_attention_blocks=None,\n        disable_middle_self_attn=False,\n        use_linear_in_transformer=False,\n    ):\n        super().__init__()\n        if use_spatial_transformer:\n            assert context_dim is not None, 'Fool!! You forgot to include the dimension of your cross-attention conditioning...'\n\n        if context_dim is not None:\n            assert use_spatial_transformer, 'Fool!! 
You forgot to use the spatial transformer for your cross-attention conditioning...'\n            from omegaconf.listconfig import ListConfig\n            if type(context_dim) == ListConfig:\n                context_dim = list(context_dim)\n\n        if num_heads_upsample == -1:\n            num_heads_upsample = num_heads\n\n        if num_heads == -1:\n            assert num_head_channels != -1, 'Either num_heads or num_head_channels has to be set'\n\n        if num_head_channels == -1:\n            assert num_heads != -1, 'Either num_heads or num_head_channels has to be set'\n\n        self.image_size = image_size\n        self.in_channels = in_channels\n        self.model_channels = model_channels\n        self.out_channels = out_channels\n        if isinstance(num_res_blocks, int):\n            self.num_res_blocks = len(channel_mult) * [num_res_blocks]\n        else:\n            if len(num_res_blocks) != len(channel_mult):\n                raise ValueError(\"provide num_res_blocks either as an int (globally constant) or \"\n                                 \"as a list/tuple (per-level) with the same length as channel_mult\")\n            self.num_res_blocks = num_res_blocks\n        if disable_self_attentions is not None:\n            # should be a list of booleans, indicating whether to disable self-attention in TransformerBlocks or not\n            assert len(disable_self_attentions) == len(channel_mult)\n        if num_attention_blocks is not None:\n            assert len(num_attention_blocks) == len(self.num_res_blocks)\n            assert all(map(lambda i: self.num_res_blocks[i] >= num_attention_blocks[i], range(len(num_attention_blocks))))\n            print(f\"Constructor of UNetModel received num_attention_blocks={num_attention_blocks}. 
\"\n                  f\"This option has LESS priority than attention_resolutions {attention_resolutions}, \"\n                  f\"i.e., in cases where num_attention_blocks[i] > 0 but 2**i not in attention_resolutions, \"\n                  f\"attention will still not be set.\")\n\n        self.attention_resolutions = attention_resolutions\n        self.dropout = dropout\n        self.channel_mult = channel_mult\n        self.conv_resample = conv_resample\n        self.num_classes = num_classes\n        self.use_checkpoint = use_checkpoint\n        self.dtype = th.float16 if use_fp16 else th.float32\n        self.num_heads = num_heads\n        self.num_head_channels = num_head_channels\n        self.num_heads_upsample = num_heads_upsample\n        self.predict_codebook_ids = n_embed is not None\n\n        time_embed_dim = model_channels * 4\n        self.time_embed = nn.Sequential(\n            linear(model_channels, time_embed_dim),\n            nn.SiLU(),\n            linear(time_embed_dim, time_embed_dim),\n        )\n\n        if self.num_classes is not None:\n            if isinstance(self.num_classes, int):\n                self.label_emb = nn.Embedding(num_classes, time_embed_dim)\n            elif self.num_classes == \"continuous\":\n                print(\"setting up linear c_adm embedding layer\")\n                self.label_emb = nn.Linear(1, time_embed_dim)\n            else:\n                raise ValueError()\n\n        self.input_blocks = nn.ModuleList(\n            [\n                TimestepEmbedSequential(\n                    conv_nd(dims, in_channels, model_channels, 3, padding=1)\n                )\n            ]\n        )\n        self._feature_size = model_channels\n        input_block_chans = [model_channels]\n        ch = model_channels\n        ds = 1\n        for level, mult in enumerate(channel_mult):\n            for nr in range(self.num_res_blocks[level]):\n                layers = [\n                    ResBlock(\n                        ch,\n                        time_embed_dim,\n                        dropout,\n                        out_channels=mult * model_channels,\n                        dims=dims,\n                        use_checkpoint=use_checkpoint,\n                        use_scale_shift_norm=use_scale_shift_norm,\n                    )\n                ]\n                ch = mult * model_channels\n                if ds in attention_resolutions:\n                    if num_head_channels == -1:\n                        dim_head = ch // num_heads\n                    else:\n                        num_heads = ch // num_head_channels\n                        dim_head = num_head_channels\n                    if legacy:\n                        #num_heads = 1\n                        dim_head = ch // num_heads if use_spatial_transformer else num_head_channels\n                    if exists(disable_self_attentions):\n                        disabled_sa = disable_self_attentions[level]\n                    else:\n                        disabled_sa = False\n\n                    if not exists(num_attention_blocks) or nr < num_attention_blocks[level]:\n                        layers.append(\n                            AttentionBlock(\n                                ch,\n                                use_checkpoint=use_checkpoint,\n                                num_heads=num_heads,\n                                num_head_channels=dim_head,\n                                use_new_attention_order=use_new_attention_order,\n                   
         ) if not use_spatial_transformer else SpatialTransformer(\n                                ch, num_heads, dim_head, depth=transformer_depth, context_dim=context_dim,\n                                disable_self_attn=disabled_sa, use_linear=use_linear_in_transformer,\n                                use_checkpoint=use_checkpoint\n                            )\n                        )\n                self.input_blocks.append(TimestepEmbedSequential(*layers))\n                self._feature_size += ch\n                input_block_chans.append(ch)\n            if level != len(channel_mult) - 1:\n                out_ch = ch\n                self.input_blocks.append(\n                    TimestepEmbedSequential(\n                        ResBlock(\n                            ch,\n                            time_embed_dim,\n                            dropout,\n                            out_channels=out_ch,\n                            dims=dims,\n                            use_checkpoint=use_checkpoint,\n                            use_scale_shift_norm=use_scale_shift_norm,\n                            down=True,\n                        )\n                        if resblock_updown\n                        else Downsample(\n                            ch, conv_resample, dims=dims, out_channels=out_ch\n                        )\n                    )\n                )\n                ch = out_ch\n                input_block_chans.append(ch)\n                ds *= 2\n                self._feature_size += ch\n\n        if num_head_channels == -1:\n            dim_head = ch // num_heads\n        else:\n            num_heads = ch // num_head_channels\n            dim_head = num_head_channels\n        if legacy:\n            #num_heads = 1\n            dim_head = ch // num_heads if use_spatial_transformer else num_head_channels\n        self.middle_block = TimestepEmbedSequential(\n            ResBlock(\n                ch,\n                time_embed_dim,\n                dropout,\n                dims=dims,\n                use_checkpoint=use_checkpoint,\n                use_scale_shift_norm=use_scale_shift_norm,\n            ),\n            AttentionBlock(\n                ch,\n                use_checkpoint=use_checkpoint,\n                num_heads=num_heads,\n                num_head_channels=dim_head,\n                use_new_attention_order=use_new_attention_order,\n            ) if not use_spatial_transformer else SpatialTransformer(  # always uses a self-attn\n                            ch, num_heads, dim_head, depth=transformer_depth, context_dim=context_dim,\n                            disable_self_attn=disable_middle_self_attn, use_linear=use_linear_in_transformer,\n                            use_checkpoint=use_checkpoint\n                        ),\n            ResBlock(\n                ch,\n                time_embed_dim,\n                dropout,\n                dims=dims,\n                use_checkpoint=use_checkpoint,\n                use_scale_shift_norm=use_scale_shift_norm,\n            ),\n        )\n        self._feature_size += ch\n\n        self.output_blocks = nn.ModuleList([])\n        for level, mult in list(enumerate(channel_mult))[::-1]:\n            for i in range(self.num_res_blocks[level] + 1):\n                ich = input_block_chans.pop()\n                layers = [\n                    ResBlock(\n                        ch + ich,\n                        time_embed_dim,\n                        dropout,\n                        
out_channels=model_channels * mult,\n                        dims=dims,\n                        use_checkpoint=use_checkpoint,\n                        use_scale_shift_norm=use_scale_shift_norm,\n                    )\n                ]\n                ch = model_channels * mult\n                if ds in attention_resolutions:\n                    if num_head_channels == -1:\n                        dim_head = ch // num_heads\n                    else:\n                        num_heads = ch // num_head_channels\n                        dim_head = num_head_channels\n                    if legacy:\n                        #num_heads = 1\n                        dim_head = ch // num_heads if use_spatial_transformer else num_head_channels\n                    if exists(disable_self_attentions):\n                        disabled_sa = disable_self_attentions[level]\n                    else:\n                        disabled_sa = False\n\n                    if not exists(num_attention_blocks) or i < num_attention_blocks[level]:\n                        layers.append(\n                            AttentionBlock(\n                                ch,\n                                use_checkpoint=use_checkpoint,\n                                num_heads=num_heads_upsample,\n                                num_head_channels=dim_head,\n                                use_new_attention_order=use_new_attention_order,\n                            ) if not use_spatial_transformer else SpatialTransformer(\n                                ch, num_heads, dim_head, depth=transformer_depth, context_dim=context_dim,\n                                disable_self_attn=disabled_sa, use_linear=use_linear_in_transformer,\n                                use_checkpoint=use_checkpoint\n                            )\n                        )\n                if level and i == self.num_res_blocks[level]:\n                    out_ch = ch\n                    layers.append(\n                        ResBlock(\n                            ch,\n                            time_embed_dim,\n                            dropout,\n                            out_channels=out_ch,\n                            dims=dims,\n                            use_checkpoint=use_checkpoint,\n                            use_scale_shift_norm=use_scale_shift_norm,\n                            up=True,\n                        )\n                        if resblock_updown\n                        else Upsample(ch, conv_resample, dims=dims, out_channels=out_ch)\n                    )\n                    ds //= 2\n                self.output_blocks.append(TimestepEmbedSequential(*layers))\n                self._feature_size += ch\n\n        self.out = nn.Sequential(\n            normalization(ch),\n            nn.SiLU(),\n            zero_module(conv_nd(dims, model_channels, out_channels, 3, padding=1)),\n        )\n        if self.predict_codebook_ids:\n            self.id_predictor = nn.Sequential(\n            normalization(ch),\n            conv_nd(dims, model_channels, n_embed, 1),\n            #nn.LogSoftmax(dim=1)  # change to cross_entropy and produce non-normalized logits\n        )\n\n    def convert_to_fp16(self):\n        \"\"\"\n        Convert the torso of the model to float16.\n        \"\"\"\n        self.input_blocks.apply(convert_module_to_f16)\n        self.middle_block.apply(convert_module_to_f16)\n        self.output_blocks.apply(convert_module_to_f16)\n\n    def convert_to_fp32(self):\n        \"\"\"\n        Convert 
the torso of the model to float32.\n        \"\"\"\n        self.input_blocks.apply(convert_module_to_f32)\n        self.middle_block.apply(convert_module_to_f32)\n        self.output_blocks.apply(convert_module_to_f32)\n\n    def forward(self, x, timesteps=None, context=None, y=None,**kwargs):\n        \"\"\"\n        Apply the model to an input batch.\n        :param x: an [N x C x ...] Tensor of inputs.\n        :param timesteps: a 1-D batch of timesteps.\n        :param context: conditioning plugged in via crossattn\n        :param y: an [N] Tensor of labels, if class-conditional.\n        :return: an [N x C x ...] Tensor of outputs.\n        \"\"\"\n        assert (y is not None) == (\n            self.num_classes is not None\n        ), \"must specify y if and only if the model is class-conditional\"\n        hs = []\n        t_emb = timestep_embedding(timesteps, self.model_channels, repeat_only=False)\n        emb = self.time_embed(t_emb)\n\n        if self.num_classes is not None:\n            assert y.shape[0] == x.shape[0]\n            emb = emb + self.label_emb(y)\n\n        h = x.type(self.dtype)\n        for module in self.input_blocks:\n            h = module(h, emb, context)\n            hs.append(h)\n        h = self.middle_block(h, emb, context)\n        for module in self.output_blocks:\n            h = th.cat([h, hs.pop()], dim=1)\n            h = module(h, emb, context)\n        h = h.type(x.dtype)\n        if self.predict_codebook_ids:\n            return self.id_predictor(h)\n        else:\n            return self.out(h)\n"
  },
  {
    "path": "ToonCrafter/ldm/modules/diffusionmodules/upscaling.py",
    "content": "import torch\nimport torch.nn as nn\nimport numpy as np\nfrom functools import partial\n\nfrom ToonCrafter.ldm.modules.diffusionmodules.util import extract_into_tensor, make_beta_schedule\nfrom ToonCrafter.ldm.util import default\n\n\nclass AbstractLowScaleModel(nn.Module):\n    # for concatenating a downsampled image to the latent representation\n    def __init__(self, noise_schedule_config=None):\n        super(AbstractLowScaleModel, self).__init__()\n        if noise_schedule_config is not None:\n            self.register_schedule(**noise_schedule_config)\n\n    def register_schedule(self, beta_schedule=\"linear\", timesteps=1000,\n                          linear_start=1e-4, linear_end=2e-2, cosine_s=8e-3):\n        betas = make_beta_schedule(beta_schedule, timesteps, linear_start=linear_start, linear_end=linear_end,\n                                   cosine_s=cosine_s)\n        alphas = 1. - betas\n        alphas_cumprod = np.cumprod(alphas, axis=0)\n        alphas_cumprod_prev = np.append(1., alphas_cumprod[:-1])\n\n        timesteps, = betas.shape\n        self.num_timesteps = int(timesteps)\n        self.linear_start = linear_start\n        self.linear_end = linear_end\n        assert alphas_cumprod.shape[0] == self.num_timesteps, 'alphas have to be defined for each timestep'\n\n        to_torch = partial(torch.tensor, dtype=torch.float32)\n\n        self.register_buffer('betas', to_torch(betas))\n        self.register_buffer('alphas_cumprod', to_torch(alphas_cumprod))\n        self.register_buffer('alphas_cumprod_prev', to_torch(alphas_cumprod_prev))\n\n        # calculations for diffusion q(x_t | x_{t-1}) and others\n        self.register_buffer('sqrt_alphas_cumprod', to_torch(np.sqrt(alphas_cumprod)))\n        self.register_buffer('sqrt_one_minus_alphas_cumprod', to_torch(np.sqrt(1. - alphas_cumprod)))\n        self.register_buffer('log_one_minus_alphas_cumprod', to_torch(np.log(1. - alphas_cumprod)))\n        self.register_buffer('sqrt_recip_alphas_cumprod', to_torch(np.sqrt(1. / alphas_cumprod)))\n        self.register_buffer('sqrt_recipm1_alphas_cumprod', to_torch(np.sqrt(1. / alphas_cumprod - 1)))\n\n    def q_sample(self, x_start, t, noise=None):\n        noise = default(noise, lambda: torch.randn_like(x_start))\n        return (extract_into_tensor(self.sqrt_alphas_cumprod, t, x_start.shape) * x_start +\n                extract_into_tensor(self.sqrt_one_minus_alphas_cumprod, t, x_start.shape) * noise)\n\n    def forward(self, x):\n        return x, None\n\n    def decode(self, x):\n        return x\n\n\nclass SimpleImageConcat(AbstractLowScaleModel):\n    # no noise level conditioning\n    def __init__(self):\n        super(SimpleImageConcat, self).__init__(noise_schedule_config=None)\n        self.max_noise_level = 0\n\n    def forward(self, x):\n        # fix to constant noise level\n        return x, torch.zeros(x.shape[0], device=x.device).long()\n\n\nclass ImageConcatWithNoiseAugmentation(AbstractLowScaleModel):\n    def __init__(self, noise_schedule_config, max_noise_level=1000, to_cuda=False):\n        super().__init__(noise_schedule_config=noise_schedule_config)\n        self.max_noise_level = max_noise_level\n\n    def forward(self, x, noise_level=None):\n        if noise_level is None:\n            noise_level = torch.randint(0, self.max_noise_level, (x.shape[0],), device=x.device).long()\n        else:\n            assert isinstance(noise_level, torch.Tensor)\n        z = self.q_sample(x, noise_level)\n        return z, noise_level\n\n\n\n"
  },
  {
    "path": "ToonCrafter/ldm/modules/diffusionmodules/util.py",
    "content": "# adopted from\n# https://github.com/openai/improved-diffusion/blob/main/improved_diffusion/gaussian_diffusion.py\n# and\n# https://github.com/lucidrains/denoising-diffusion-pytorch/blob/7706bdfc6f527f58d33f84b7b522e61e6e3164b3/denoising_diffusion_pytorch/denoising_diffusion_pytorch.py\n# and\n# https://github.com/openai/guided-diffusion/blob/0ba878e517b276c45d1195eb29f6f5f72659a05b/guided_diffusion/nn.py\n#\n# thanks!\n\n\nimport os\nimport math\nimport torch\nimport torch.nn as nn\nimport numpy as np\nfrom einops import repeat\n\nfrom ToonCrafter.ldm.util import instantiate_from_config\n\n\ndef make_beta_schedule(schedule, n_timestep, linear_start=1e-4, linear_end=2e-2, cosine_s=8e-3):\n    if schedule == \"linear\":\n        betas = (\n                torch.linspace(linear_start ** 0.5, linear_end ** 0.5, n_timestep, dtype=torch.float64) ** 2\n        )\n\n    elif schedule == \"cosine\":\n        timesteps = (\n                torch.arange(n_timestep + 1, dtype=torch.float64) / n_timestep + cosine_s\n        )\n        alphas = timesteps / (1 + cosine_s) * np.pi / 2\n        alphas = torch.cos(alphas).pow(2)\n        alphas = alphas / alphas[0]\n        betas = 1 - alphas[1:] / alphas[:-1]\n        betas = np.clip(betas, a_min=0, a_max=0.999)\n\n    elif schedule == \"sqrt_linear\":\n        betas = torch.linspace(linear_start, linear_end, n_timestep, dtype=torch.float64)\n    elif schedule == \"sqrt\":\n        betas = torch.linspace(linear_start, linear_end, n_timestep, dtype=torch.float64) ** 0.5\n    else:\n        raise ValueError(f\"schedule '{schedule}' unknown.\")\n    return betas.numpy()\n\n\ndef make_ddim_timesteps(ddim_discr_method, num_ddim_timesteps, num_ddpm_timesteps, verbose=True):\n    if ddim_discr_method == 'uniform':\n        c = num_ddpm_timesteps // num_ddim_timesteps\n        ddim_timesteps = np.asarray(list(range(0, num_ddpm_timesteps, c)))\n    elif ddim_discr_method == 'quad':\n        ddim_timesteps = ((np.linspace(0, np.sqrt(num_ddpm_timesteps * .8), num_ddim_timesteps)) ** 2).astype(int)\n    else:\n        raise NotImplementedError(f'There is no ddim discretization method called \"{ddim_discr_method}\"')\n\n    # assert ddim_timesteps.shape[0] == num_ddim_timesteps\n    # add one to get the final alpha values right (the ones from first scale to data during sampling)\n    steps_out = ddim_timesteps + 1\n    if verbose:\n        print(f'Selected timesteps for ddim sampler: {steps_out}')\n    return steps_out\n\n\ndef make_ddim_sampling_parameters(alphacums, ddim_timesteps, eta, verbose=True):\n    # select alphas for computing the variance schedule\n    alphas = alphacums[ddim_timesteps]\n    alphas_prev = np.asarray([alphacums[0]] + alphacums[ddim_timesteps[:-1]].tolist())\n\n    # according the the formula provided in https://arxiv.org/abs/2010.02502\n    sigmas = eta * np.sqrt((1 - alphas_prev) / (1 - alphas) * (1 - alphas / alphas_prev))\n    if verbose:\n        print(f'Selected alphas for ddim sampler: a_t: {alphas}; a_(t-1): {alphas_prev}')\n        print(f'For the chosen value of eta, which is {eta}, '\n              f'this results in the following sigma_t schedule for ddim sampler {sigmas}')\n    return sigmas, alphas, alphas_prev\n\n\ndef betas_for_alpha_bar(num_diffusion_timesteps, alpha_bar, max_beta=0.999):\n    \"\"\"\n    Create a beta schedule that discretizes the given alpha_t_bar function,\n    which defines the cumulative product of (1-beta) over time from t = [0,1].\n    :param num_diffusion_timesteps: the number of betas 
to produce.\n    :param alpha_bar: a lambda that takes an argument t from 0 to 1 and\n                      produces the cumulative product of (1-beta) up to that\n                      part of the diffusion process.\n    :param max_beta: the maximum beta to use; use values lower than 1 to\n                     prevent singularities.\n    \"\"\"\n    betas = []\n    for i in range(num_diffusion_timesteps):\n        t1 = i / num_diffusion_timesteps\n        t2 = (i + 1) / num_diffusion_timesteps\n        betas.append(min(1 - alpha_bar(t2) / alpha_bar(t1), max_beta))\n    return np.array(betas)\n\n\ndef extract_into_tensor(a, t, x_shape):\n    b, *_ = t.shape\n    out = a.gather(-1, t)\n    return out.reshape(b, *((1,) * (len(x_shape) - 1)))\n\n\ndef checkpoint(func, inputs, params, flag):\n    \"\"\"\n    Evaluate a function without caching intermediate activations, allowing for\n    reduced memory at the expense of extra compute in the backward pass.\n    :param func: the function to evaluate.\n    :param inputs: the argument sequence to pass to `func`.\n    :param params: a sequence of parameters `func` depends on but does not\n                   explicitly take as arguments.\n    :param flag: if False, disable gradient checkpointing.\n    \"\"\"\n    if flag:\n        args = tuple(inputs) + tuple(params)\n        return CheckpointFunction.apply(func, len(inputs), *args)\n    else:\n        return func(*inputs)\n\n\nclass CheckpointFunction(torch.autograd.Function):\n    @staticmethod\n    def forward(ctx, run_function, length, *args):\n        ctx.run_function = run_function\n        ctx.input_tensors = list(args[:length])\n        ctx.input_params = list(args[length:])\n        ctx.gpu_autocast_kwargs = {\"enabled\": torch.is_autocast_enabled(),\n                                   \"dtype\": torch.get_autocast_gpu_dtype(),\n                                   \"cache_enabled\": torch.is_autocast_cache_enabled()}\n        with torch.no_grad():\n            output_tensors = ctx.run_function(*ctx.input_tensors)\n        return output_tensors\n\n    @staticmethod\n    def backward(ctx, *output_grads):\n        ctx.input_tensors = [x.detach().requires_grad_(True) for x in ctx.input_tensors]\n        with torch.enable_grad(), \\\n                torch.cuda.amp.autocast(**ctx.gpu_autocast_kwargs):\n            # Fixes a bug where the first op in run_function modifies the\n            # Tensor storage in place, which is not allowed for detach()'d\n            # Tensors.\n            shallow_copies = [x.view_as(x) for x in ctx.input_tensors]\n            output_tensors = ctx.run_function(*shallow_copies)\n        input_grads = torch.autograd.grad(\n            output_tensors,\n            ctx.input_tensors + ctx.input_params,\n            output_grads,\n            allow_unused=True,\n        )\n        del ctx.input_tensors\n        del ctx.input_params\n        del output_tensors\n        return (None, None) + input_grads\n\n\ndef timestep_embedding(timesteps, dim, max_period=10000, repeat_only=False):\n    \"\"\"\n    Create sinusoidal timestep embeddings.\n    :param timesteps: a 1-D Tensor of N indices, one per batch element.\n                      These may be fractional.\n    :param dim: the dimension of the output.\n    :param max_period: controls the minimum frequency of the embeddings.\n    :return: an [N x dim] Tensor of positional embeddings.\n    \"\"\"\n    if not repeat_only:\n        half = dim // 2\n        freqs = torch.exp(\n            -math.log(max_period) * 
torch.arange(start=0, end=half, dtype=torch.float32) / half\n        ).to(device=timesteps.device)\n        args = timesteps[:, None].float() * freqs[None]\n        embedding = torch.cat([torch.cos(args), torch.sin(args)], dim=-1)\n        if dim % 2:\n            embedding = torch.cat([embedding, torch.zeros_like(embedding[:, :1])], dim=-1)\n    else:\n        embedding = repeat(timesteps, 'b -> b d', d=dim)\n    return embedding\n\n\ndef zero_module(module):\n    \"\"\"\n    Zero out the parameters of a module and return it.\n    \"\"\"\n    for p in module.parameters():\n        p.detach().zero_()\n    return module\n\n\ndef scale_module(module, scale):\n    \"\"\"\n    Scale the parameters of a module and return it.\n    \"\"\"\n    for p in module.parameters():\n        p.detach().mul_(scale)\n    return module\n\n\ndef mean_flat(tensor):\n    \"\"\"\n    Take the mean over all non-batch dimensions.\n    \"\"\"\n    return tensor.mean(dim=list(range(1, len(tensor.shape))))\n\n\ndef normalization(channels):\n    \"\"\"\n    Make a standard normalization layer.\n    :param channels: number of input channels.\n    :return: an nn.Module for normalization.\n    \"\"\"\n    return GroupNorm32(32, channels)\n\n\n# PyTorch 1.7 has SiLU, but we support PyTorch 1.5.\nclass SiLU(nn.Module):\n    def forward(self, x):\n        return x * torch.sigmoid(x)\n\n\nclass GroupNorm32(nn.GroupNorm):\n    def forward(self, x):\n        return super().forward(x.float()).type(x.dtype)\n\ndef conv_nd(dims, *args, **kwargs):\n    \"\"\"\n    Create a 1D, 2D, or 3D convolution module.\n    \"\"\"\n    if dims == 1:\n        return nn.Conv1d(*args, **kwargs)\n    elif dims == 2:\n        return nn.Conv2d(*args, **kwargs)\n    elif dims == 3:\n        return nn.Conv3d(*args, **kwargs)\n    raise ValueError(f\"unsupported dimensions: {dims}\")\n\n\ndef linear(*args, **kwargs):\n    \"\"\"\n    Create a linear module.\n    \"\"\"\n    return nn.Linear(*args, **kwargs)\n\n\ndef avg_pool_nd(dims, *args, **kwargs):\n    \"\"\"\n    Create a 1D, 2D, or 3D average pooling module.\n    \"\"\"\n    if dims == 1:\n        return nn.AvgPool1d(*args, **kwargs)\n    elif dims == 2:\n        return nn.AvgPool2d(*args, **kwargs)\n    elif dims == 3:\n        return nn.AvgPool3d(*args, **kwargs)\n    raise ValueError(f\"unsupported dimensions: {dims}\")\n\n\nclass HybridConditioner(nn.Module):\n\n    def __init__(self, c_concat_config, c_crossattn_config):\n        super().__init__()\n        self.concat_conditioner = instantiate_from_config(c_concat_config)\n        self.crossattn_conditioner = instantiate_from_config(c_crossattn_config)\n\n    def forward(self, c_concat, c_crossattn):\n        c_concat = self.concat_conditioner(c_concat)\n        c_crossattn = self.crossattn_conditioner(c_crossattn)\n        return {'c_concat': [c_concat], 'c_crossattn': [c_crossattn]}\n\n\ndef noise_like(shape, device, repeat=False):\n    repeat_noise = lambda: torch.randn((1, *shape[1:]), device=device).repeat(shape[0], *((1,) * (len(shape) - 1)))\n    noise = lambda: torch.randn(shape, device=device)\n    return repeat_noise() if repeat else noise()"
  },
  {
    "path": "ToonCrafter/ldm/modules/distributions/__init__.py",
    "content": ""
  },
  {
    "path": "ToonCrafter/ldm/modules/distributions/distributions.py",
    "content": "import torch\nimport numpy as np\n\n\nclass AbstractDistribution:\n    def sample(self):\n        raise NotImplementedError()\n\n    def mode(self):\n        raise NotImplementedError()\n\n\nclass DiracDistribution(AbstractDistribution):\n    def __init__(self, value):\n        self.value = value\n\n    def sample(self):\n        return self.value\n\n    def mode(self):\n        return self.value\n\n\nclass DiagonalGaussianDistribution(object):\n    def __init__(self, parameters, deterministic=False):\n        self.parameters = parameters\n        self.mean, self.logvar = torch.chunk(parameters, 2, dim=1)\n        self.logvar = torch.clamp(self.logvar, -30.0, 20.0)\n        self.deterministic = deterministic\n        self.std = torch.exp(0.5 * self.logvar)\n        self.var = torch.exp(self.logvar)\n        if self.deterministic:\n            self.var = self.std = torch.zeros_like(self.mean).to(device=self.parameters.device)\n\n    def sample(self):\n        x = self.mean + self.std * torch.randn(self.mean.shape).to(device=self.parameters.device)\n        return x\n\n    def kl(self, other=None):\n        if self.deterministic:\n            return torch.Tensor([0.])\n        else:\n            if other is None:\n                return 0.5 * torch.sum(torch.pow(self.mean, 2)\n                                       + self.var - 1.0 - self.logvar,\n                                       dim=[1, 2, 3])\n            else:\n                return 0.5 * torch.sum(\n                    torch.pow(self.mean - other.mean, 2) / other.var\n                    + self.var / other.var - 1.0 - self.logvar + other.logvar,\n                    dim=[1, 2, 3])\n\n    def nll(self, sample, dims=[1,2,3]):\n        if self.deterministic:\n            return torch.Tensor([0.])\n        logtwopi = np.log(2.0 * np.pi)\n        return 0.5 * torch.sum(\n            logtwopi + self.logvar + torch.pow(sample - self.mean, 2) / self.var,\n            dim=dims)\n\n    def mode(self):\n        return self.mean\n\n\ndef normal_kl(mean1, logvar1, mean2, logvar2):\n    \"\"\"\n    source: https://github.com/openai/guided-diffusion/blob/27c20a8fab9cb472df5d6bdd6c8d11c8f430b924/guided_diffusion/losses.py#L12\n    Compute the KL divergence between two gaussians.\n    Shapes are automatically broadcasted, so batches can be compared to\n    scalars, among other use cases.\n    \"\"\"\n    tensor = None\n    for obj in (mean1, logvar1, mean2, logvar2):\n        if isinstance(obj, torch.Tensor):\n            tensor = obj\n            break\n    assert tensor is not None, \"at least one argument must be a Tensor\"\n\n    # Force variances to be Tensors. Broadcasting helps convert scalars to\n    # Tensors, but it does not work for torch.exp().\n    logvar1, logvar2 = [\n        x if isinstance(x, torch.Tensor) else torch.tensor(x).to(tensor)\n        for x in (logvar1, logvar2)\n    ]\n\n    return 0.5 * (\n        -1.0\n        + logvar2\n        - logvar1\n        + torch.exp(logvar1 - logvar2)\n        + ((mean1 - mean2) ** 2) * torch.exp(-logvar2)\n    )\n"
  },
  {
    "path": "ToonCrafter/ldm/modules/ema.py",
    "content": "import torch\nfrom torch import nn\n\n\nclass LitEma(nn.Module):\n    def __init__(self, model, decay=0.9999, use_num_upates=True):\n        super().__init__()\n        if decay < 0.0 or decay > 1.0:\n            raise ValueError('Decay must be between 0 and 1')\n\n        self.m_name2s_name = {}\n        self.register_buffer('decay', torch.tensor(decay, dtype=torch.float32))\n        self.register_buffer('num_updates', torch.tensor(0, dtype=torch.int) if use_num_upates\n        else torch.tensor(-1, dtype=torch.int))\n\n        for name, p in model.named_parameters():\n            if p.requires_grad:\n                # remove as '.'-character is not allowed in buffers\n                s_name = name.replace('.', '')\n                self.m_name2s_name.update({name: s_name})\n                self.register_buffer(s_name, p.clone().detach().data)\n\n        self.collected_params = []\n\n    def reset_num_updates(self):\n        del self.num_updates\n        self.register_buffer('num_updates', torch.tensor(0, dtype=torch.int))\n\n    def forward(self, model):\n        decay = self.decay\n\n        if self.num_updates >= 0:\n            self.num_updates += 1\n            decay = min(self.decay, (1 + self.num_updates) / (10 + self.num_updates))\n\n        one_minus_decay = 1.0 - decay\n\n        with torch.no_grad():\n            m_param = dict(model.named_parameters())\n            shadow_params = dict(self.named_buffers())\n\n            for key in m_param:\n                if m_param[key].requires_grad:\n                    sname = self.m_name2s_name[key]\n                    shadow_params[sname] = shadow_params[sname].type_as(m_param[key])\n                    shadow_params[sname].sub_(one_minus_decay * (shadow_params[sname] - m_param[key]))\n                else:\n                    assert not key in self.m_name2s_name\n\n    def copy_to(self, model):\n        m_param = dict(model.named_parameters())\n        shadow_params = dict(self.named_buffers())\n        for key in m_param:\n            if m_param[key].requires_grad:\n                m_param[key].data.copy_(shadow_params[self.m_name2s_name[key]].data)\n            else:\n                assert not key in self.m_name2s_name\n\n    def store(self, parameters):\n        \"\"\"\n        Save the current parameters for restoring later.\n        Args:\n          parameters: Iterable of `torch.nn.Parameter`; the parameters to be\n            temporarily stored.\n        \"\"\"\n        self.collected_params = [param.clone() for param in parameters]\n\n    def restore(self, parameters):\n        \"\"\"\n        Restore the parameters stored with the `store` method.\n        Useful to validate the model with EMA parameters without affecting the\n        original optimization process. Store the parameters before the\n        `copy_to` method. After validation (or model saving), use this to\n        restore the former parameters.\n        Args:\n          parameters: Iterable of `torch.nn.Parameter`; the parameters to be\n            updated with the stored parameters.\n        \"\"\"\n        for c_param, param in zip(self.collected_params, parameters):\n            param.data.copy_(c_param.data)\n"
  },
  {
    "path": "ToonCrafter/ldm/modules/encoders/__init__.py",
    "content": ""
  },
  {
    "path": "ToonCrafter/ldm/modules/encoders/modules.py",
    "content": "import torch\nimport torch.nn as nn\nfrom torch.utils.checkpoint import checkpoint\n\nfrom transformers import T5Tokenizer, T5EncoderModel, CLIPTokenizer, CLIPTextModel\n\nimport open_clip\nfrom ToonCrafter.ldm.util import default, count_params\n\n\nclass AbstractEncoder(nn.Module):\n    def __init__(self):\n        super().__init__()\n\n    def encode(self, *args, **kwargs):\n        raise NotImplementedError\n\n\nclass IdentityEncoder(AbstractEncoder):\n\n    def encode(self, x):\n        return x\n\n\nclass ClassEmbedder(nn.Module):\n    def __init__(self, embed_dim, n_classes=1000, key='class', ucg_rate=0.1):\n        super().__init__()\n        self.key = key\n        self.embedding = nn.Embedding(n_classes, embed_dim)\n        self.n_classes = n_classes\n        self.ucg_rate = ucg_rate\n\n    def forward(self, batch, key=None, disable_dropout=False):\n        if key is None:\n            key = self.key\n        # this is for use in crossattn\n        c = batch[key][:, None]\n        if self.ucg_rate > 0. and not disable_dropout:\n            mask = 1. - torch.bernoulli(torch.ones_like(c) * self.ucg_rate)\n            c = mask * c + (1-mask) * torch.ones_like(c)*(self.n_classes-1)\n            c = c.long()\n        c = self.embedding(c)\n        return c\n\n    def get_unconditional_conditioning(self, bs, device=\"cuda\"):\n        uc_class = self.n_classes - 1  # 1000 classes --> 0 ... 999, one extra class for ucg (class 1000)\n        uc = torch.ones((bs,), device=device) * uc_class\n        uc = {self.key: uc}\n        return uc\n\n\ndef disabled_train(self, mode=True):\n    \"\"\"Overwrite model.train with this function to make sure train/eval mode\n    does not change anymore.\"\"\"\n    return self\n\n\nclass FrozenT5Embedder(AbstractEncoder):\n    \"\"\"Uses the T5 transformer encoder for text\"\"\"\n    def __init__(self, version=\"google/t5-v1_1-large\", device=\"cuda\", max_length=77, freeze=True):  # others are google/t5-v1_1-xl and google/t5-v1_1-xxl\n        super().__init__()\n        self.tokenizer = T5Tokenizer.from_pretrained(version)\n        self.transformer = T5EncoderModel.from_pretrained(version)\n        self.device = device\n        self.max_length = max_length   # TODO: typical value?\n        if freeze:\n            self.freeze()\n\n    def freeze(self):\n        self.transformer = self.transformer.eval()\n        #self.train = disabled_train\n        for param in self.parameters():\n            param.requires_grad = False\n\n    def forward(self, text):\n        batch_encoding = self.tokenizer(text, truncation=True, max_length=self.max_length, return_length=True,\n                                        return_overflowing_tokens=False, padding=\"max_length\", return_tensors=\"pt\")\n        tokens = batch_encoding[\"input_ids\"].to(self.device)\n        outputs = self.transformer(input_ids=tokens)\n\n        z = outputs.last_hidden_state\n        return z\n\n    def encode(self, text):\n        return self(text)\n\n\nclass FrozenCLIPEmbedder(AbstractEncoder):\n    \"\"\"Uses the CLIP transformer encoder for text (from huggingface)\"\"\"\n    LAYERS = [\n        \"last\",\n        \"pooled\",\n        \"hidden\"\n    ]\n    def __init__(self, version=\"openai/clip-vit-large-patch14\", device=\"cuda\", max_length=77,\n                 freeze=True, layer=\"last\", layer_idx=None):  # clip-vit-base-patch32\n        super().__init__()\n        assert layer in self.LAYERS\n        self.tokenizer = CLIPTokenizer.from_pretrained(version)\n        
self.transformer = CLIPTextModel.from_pretrained(version)\n        self.device = device\n        self.max_length = max_length\n        if freeze:\n            self.freeze()\n        self.layer = layer\n        self.layer_idx = layer_idx\n        if layer == \"hidden\":\n            assert layer_idx is not None\n            assert 0 <= abs(layer_idx) <= 12\n\n    def freeze(self):\n        self.transformer = self.transformer.eval()\n        #self.train = disabled_train\n        for param in self.parameters():\n            param.requires_grad = False\n\n    def forward(self, text):\n        batch_encoding = self.tokenizer(text, truncation=True, max_length=self.max_length, return_length=True,\n                                        return_overflowing_tokens=False, padding=\"max_length\", return_tensors=\"pt\")\n        tokens = batch_encoding[\"input_ids\"].to(self.device)\n        outputs = self.transformer(input_ids=tokens, output_hidden_states=self.layer==\"hidden\")\n        if self.layer == \"last\":\n            z = outputs.last_hidden_state\n        elif self.layer == \"pooled\":\n            z = outputs.pooler_output[:, None, :]\n        else:\n            z = outputs.hidden_states[self.layer_idx]\n        return z\n\n    def encode(self, text):\n        return self(text)\n\n\nclass FrozenOpenCLIPEmbedder(AbstractEncoder):\n    \"\"\"\n    Uses the OpenCLIP transformer encoder for text\n    \"\"\"\n    LAYERS = [\n        #\"pooled\",\n        \"last\",\n        \"penultimate\"\n    ]\n    def __init__(self, arch=\"ViT-H-14\", version=\"laion2b_s32b_b79k\", device=\"cuda\", max_length=77,\n                 freeze=True, layer=\"last\"):\n        super().__init__()\n        assert layer in self.LAYERS\n        model, _, _ = open_clip.create_model_and_transforms(arch, device=torch.device('cpu'), pretrained=version)\n        del model.visual\n        self.model = model\n\n        self.device = device\n        self.max_length = max_length\n        if freeze:\n            self.freeze()\n        self.layer = layer\n        if self.layer == \"last\":\n            self.layer_idx = 0\n        elif self.layer == \"penultimate\":\n            self.layer_idx = 1\n        else:\n            raise NotImplementedError()\n\n    def freeze(self):\n        self.model = self.model.eval()\n        for param in self.parameters():\n            param.requires_grad = False\n\n    def forward(self, text):\n        tokens = open_clip.tokenize(text)\n        z = self.encode_with_transformer(tokens.to(self.device))\n        return z\n\n    def encode_with_transformer(self, text):\n        x = self.model.token_embedding(text)  # [batch_size, n_ctx, d_model]\n        x = x + self.model.positional_embedding\n        x = x.permute(1, 0, 2)  # NLD -> LND\n        x = self.text_transformer_forward(x, attn_mask=self.model.attn_mask)\n        x = x.permute(1, 0, 2)  # LND -> NLD\n        x = self.model.ln_final(x)\n        return x\n\n    def text_transformer_forward(self, x: torch.Tensor, attn_mask = None):\n        for i, r in enumerate(self.model.transformer.resblocks):\n            if i == len(self.model.transformer.resblocks) - self.layer_idx:\n                break\n            if self.model.transformer.grad_checkpointing and not torch.jit.is_scripting():\n                x = checkpoint(r, x, attn_mask)\n            else:\n                x = r(x, attn_mask=attn_mask)\n        return x\n\n    def encode(self, text):\n        return self(text)\n\n\nclass FrozenCLIPT5Encoder(AbstractEncoder):\n    def __init__(self, 
clip_version=\"openai/clip-vit-large-patch14\", t5_version=\"google/t5-v1_1-xl\", device=\"cuda\",\n                 clip_max_length=77, t5_max_length=77):\n        super().__init__()\n        self.clip_encoder = FrozenCLIPEmbedder(clip_version, device, max_length=clip_max_length)\n        self.t5_encoder = FrozenT5Embedder(t5_version, device, max_length=t5_max_length)\n        print(f\"{self.clip_encoder.__class__.__name__} has {count_params(self.clip_encoder)*1.e-6:.2f} M parameters, \"\n              f\"{self.t5_encoder.__class__.__name__} comes with {count_params(self.t5_encoder)*1.e-6:.2f} M params.\")\n\n    def encode(self, text):\n        return self(text)\n\n    def forward(self, text):\n        clip_z = self.clip_encoder.encode(text)\n        t5_z = self.t5_encoder.encode(text)\n        return [clip_z, t5_z]\n\n\n"
  },
  {
    "path": "ToonCrafter/ldm/modules/image_degradation/__init__.py",
    "content": "from ToonCrafter.ldm.modules.image_degradation.bsrgan import degradation_bsrgan_variant as degradation_fn_bsr\nfrom ToonCrafter.ldm.modules.image_degradation.bsrgan_light import degradation_bsrgan_variant as degradation_fn_bsr_light\n"
  },
  {
    "path": "ToonCrafter/ldm/modules/image_degradation/bsrgan.py",
    "content": "# -*- coding: utf-8 -*-\n\"\"\"\n# --------------------------------------------\n# Super-Resolution\n# --------------------------------------------\n#\n# Kai Zhang (cskaizhang@gmail.com)\n# https://github.com/cszn\n# From 2019/03--2021/08\n# --------------------------------------------\n\"\"\"\n\nimport numpy as np\nimport cv2\nimport torch\n\nfrom functools import partial\nimport random\nfrom scipy import ndimage\nimport scipy\nimport scipy.stats as ss\nfrom scipy.interpolate import interp2d\nfrom scipy.linalg import orth\nimport albumentations\n\nimport ldm.modules.image_degradation.utils_image as util\n\n\ndef modcrop_np(img, sf):\n    '''\n    Args:\n        img: numpy image, WxH or WxHxC\n        sf: scale factor\n    Return:\n        cropped image\n    '''\n    w, h = img.shape[:2]\n    im = np.copy(img)\n    return im[:w - w % sf, :h - h % sf, ...]\n\n\n\"\"\"\n# --------------------------------------------\n# anisotropic Gaussian kernels\n# --------------------------------------------\n\"\"\"\n\n\ndef analytic_kernel(k):\n    \"\"\"Calculate the X4 kernel from the X2 kernel (for proof see appendix in paper)\"\"\"\n    k_size = k.shape[0]\n    # Calculate the big kernels size\n    big_k = np.zeros((3 * k_size - 2, 3 * k_size - 2))\n    # Loop over the small kernel to fill the big one\n    for r in range(k_size):\n        for c in range(k_size):\n            big_k[2 * r:2 * r + k_size, 2 * c:2 * c + k_size] += k[r, c] * k\n    # Crop the edges of the big kernel to ignore very small values and increase run time of SR\n    crop = k_size // 2\n    cropped_big_k = big_k[crop:-crop, crop:-crop]\n    # Normalize to 1\n    return cropped_big_k / cropped_big_k.sum()\n\n\ndef anisotropic_Gaussian(ksize=15, theta=np.pi, l1=6, l2=6):\n    \"\"\" generate an anisotropic Gaussian kernel\n    Args:\n        ksize : e.g., 15, kernel size\n        theta : [0,  pi], rotation angle range\n        l1    : [0.1,50], scaling of eigenvalues\n        l2    : [0.1,l1], scaling of eigenvalues\n        If l1 = l2, will get an isotropic Gaussian kernel.\n    Returns:\n        k     : kernel\n    \"\"\"\n\n    v = np.dot(np.array([[np.cos(theta), -np.sin(theta)], [np.sin(theta), np.cos(theta)]]), np.array([1., 0.]))\n    V = np.array([[v[0], v[1]], [v[1], -v[0]]])\n    D = np.array([[l1, 0], [0, l2]])\n    Sigma = np.dot(np.dot(V, D), np.linalg.inv(V))\n    k = gm_blur_kernel(mean=[0, 0], cov=Sigma, size=ksize)\n\n    return k\n\n\ndef gm_blur_kernel(mean, cov, size=15):\n    center = size / 2.0 + 0.5\n    k = np.zeros([size, size])\n    for y in range(size):\n        for x in range(size):\n            cy = y - center + 1\n            cx = x - center + 1\n            k[y, x] = ss.multivariate_normal.pdf([cx, cy], mean=mean, cov=cov)\n\n    k = k / np.sum(k)\n    return k\n\n\ndef shift_pixel(x, sf, upper_left=True):\n    \"\"\"shift pixel for super-resolution with different scale factors\n    Args:\n        x: WxHxC or WxH\n        sf: scale factor\n        upper_left: shift direction\n    \"\"\"\n    h, w = x.shape[:2]\n    shift = (sf - 1) * 0.5\n    xv, yv = np.arange(0, w, 1.0), np.arange(0, h, 1.0)\n    if upper_left:\n        x1 = xv + shift\n        y1 = yv + shift\n    else:\n        x1 = xv - shift\n        y1 = yv - shift\n\n    x1 = np.clip(x1, 0, w - 1)\n    y1 = np.clip(y1, 0, h - 1)\n\n    if x.ndim == 2:\n        x = interp2d(xv, yv, x)(x1, y1)\n    if x.ndim == 3:\n        for i in range(x.shape[-1]):\n            x[:, :, i] = interp2d(xv, yv, x[:, :, i])(x1, y1)\n\n    return 
x\n\n\ndef blur(x, k):\n    '''\n    x: image, NxcxHxW\n    k: kernel, Nx1xhxw\n    '''\n    n, c = x.shape[:2]\n    p1, p2 = (k.shape[-2] - 1) // 2, (k.shape[-1] - 1) // 2\n    x = torch.nn.functional.pad(x, pad=(p1, p2, p1, p2), mode='replicate')\n    k = k.repeat(1, c, 1, 1)\n    k = k.view(-1, 1, k.shape[2], k.shape[3])\n    x = x.view(1, -1, x.shape[2], x.shape[3])\n    x = torch.nn.functional.conv2d(x, k, bias=None, stride=1, padding=0, groups=n * c)\n    x = x.view(n, c, x.shape[2], x.shape[3])\n\n    return x\n\n\ndef gen_kernel(k_size=np.array([15, 15]), scale_factor=np.array([4, 4]), min_var=0.6, max_var=10., noise_level=0):\n    \"\"\"\"\n    # modified version of https://github.com/assafshocher/BlindSR_dataset_generator\n    # Kai Zhang\n    # min_var = 0.175 * sf  # variance of the gaussian kernel will be sampled between min_var and max_var\n    # max_var = 2.5 * sf\n    \"\"\"\n    # Set random eigen-vals (lambdas) and angle (theta) for COV matrix\n    lambda_1 = min_var + np.random.rand() * (max_var - min_var)\n    lambda_2 = min_var + np.random.rand() * (max_var - min_var)\n    theta = np.random.rand() * np.pi  # random theta\n    noise = -noise_level + np.random.rand(*k_size) * noise_level * 2\n\n    # Set COV matrix using Lambdas and Theta\n    LAMBDA = np.diag([lambda_1, lambda_2])\n    Q = np.array([[np.cos(theta), -np.sin(theta)],\n                  [np.sin(theta), np.cos(theta)]])\n    SIGMA = Q @ LAMBDA @ Q.T\n    INV_SIGMA = np.linalg.inv(SIGMA)[None, None, :, :]\n\n    # Set expectation position (shifting kernel for aligned image)\n    MU = k_size // 2 - 0.5 * (scale_factor - 1)  # - 0.5 * (scale_factor - k_size % 2)\n    MU = MU[None, None, :, None]\n\n    # Create meshgrid for Gaussian\n    [X, Y] = np.meshgrid(range(k_size[0]), range(k_size[1]))\n    Z = np.stack([X, Y], 2)[:, :, :, None]\n\n    # Calcualte Gaussian for every pixel of the kernel\n    ZZ = Z - MU\n    ZZ_t = ZZ.transpose(0, 1, 3, 2)\n    raw_kernel = np.exp(-0.5 * np.squeeze(ZZ_t @ INV_SIGMA @ ZZ)) * (1 + noise)\n\n    # shift the kernel so it will be centered\n    # raw_kernel_centered = kernel_shift(raw_kernel, scale_factor)\n\n    # Normalize the kernel and return\n    # kernel = raw_kernel_centered / np.sum(raw_kernel_centered)\n    kernel = raw_kernel / np.sum(raw_kernel)\n    return kernel\n\n\ndef fspecial_gaussian(hsize, sigma):\n    hsize = [hsize, hsize]\n    siz = [(hsize[0] - 1.0) / 2.0, (hsize[1] - 1.0) / 2.0]\n    std = sigma\n    [x, y] = np.meshgrid(np.arange(-siz[1], siz[1] + 1), np.arange(-siz[0], siz[0] + 1))\n    arg = -(x * x + y * y) / (2 * std * std)\n    h = np.exp(arg)\n    h[h < scipy.finfo(float).eps * h.max()] = 0\n    sumh = h.sum()\n    if sumh != 0:\n        h = h / sumh\n    return h\n\n\ndef fspecial_laplacian(alpha):\n    alpha = max([0, min([alpha, 1])])\n    h1 = alpha / (alpha + 1)\n    h2 = (1 - alpha) / (alpha + 1)\n    h = [[h1, h2, h1], [h2, -4 / (alpha + 1), h2], [h1, h2, h1]]\n    h = np.array(h)\n    return h\n\n\ndef fspecial(filter_type, *args, **kwargs):\n    '''\n    python code from:\n    https://github.com/ronaldosena/imagens-medicas-2/blob/40171a6c259edec7827a6693a93955de2bd39e76/Aulas/aula_2_-_uniform_filter/matlab_fspecial.py\n    '''\n    if filter_type == 'gaussian':\n        return fspecial_gaussian(*args, **kwargs)\n    if filter_type == 'laplacian':\n        return fspecial_laplacian(*args, **kwargs)\n\n\n\"\"\"\n# --------------------------------------------\n# degradation models\n# 
--------------------------------------------\n\"\"\"\n\n\ndef bicubic_degradation(x, sf=3):\n    '''\n    Args:\n        x: HxWxC image, [0, 1]\n        sf: down-scale factor\n    Return:\n        bicubicly downsampled LR image\n    '''\n    x = util.imresize_np(x, scale=1 / sf)\n    return x\n\n\ndef srmd_degradation(x, k, sf=3):\n    ''' blur + bicubic downsampling\n    Args:\n        x: HxWxC image, [0, 1]\n        k: hxw, double\n        sf: down-scale factor\n    Return:\n        downsampled LR image\n    Reference:\n        @inproceedings{zhang2018learning,\n          title={Learning a single convolutional super-resolution network for multiple degradations},\n          author={Zhang, Kai and Zuo, Wangmeng and Zhang, Lei},\n          booktitle={IEEE Conference on Computer Vision and Pattern Recognition},\n          pages={3262--3271},\n          year={2018}\n        }\n    '''\n    x = ndimage.filters.convolve(x, np.expand_dims(k, axis=2), mode='wrap')  # 'nearest' | 'mirror'\n    x = bicubic_degradation(x, sf=sf)\n    return x\n\n\ndef dpsr_degradation(x, k, sf=3):\n    ''' bicubic downsampling + blur\n    Args:\n        x: HxWxC image, [0, 1]\n        k: hxw, double\n        sf: down-scale factor\n    Return:\n        downsampled LR image\n    Reference:\n        @inproceedings{zhang2019deep,\n          title={Deep Plug-and-Play Super-Resolution for Arbitrary Blur Kernels},\n          author={Zhang, Kai and Zuo, Wangmeng and Zhang, Lei},\n          booktitle={IEEE Conference on Computer Vision and Pattern Recognition},\n          pages={1671--1681},\n          year={2019}\n        }\n    '''\n    x = bicubic_degradation(x, sf=sf)\n    x = ndimage.filters.convolve(x, np.expand_dims(k, axis=2), mode='wrap')\n    return x\n\n\ndef classical_degradation(x, k, sf=3):\n    ''' blur + downsampling\n    Args:\n        x: HxWxC image, [0, 1]/[0, 255]\n        k: hxw, double\n        sf: down-scale factor\n    Return:\n        downsampled LR image\n    '''\n    x = ndimage.filters.convolve(x, np.expand_dims(k, axis=2), mode='wrap')\n    # x = filters.correlate(x, np.expand_dims(np.flip(k), axis=2))\n    st = 0\n    return x[st::sf, st::sf, ...]\n\n\ndef add_sharpening(img, weight=0.5, radius=50, threshold=10):\n    \"\"\"USM sharpening. borrowed from real-ESRGAN\n    Input image: I; Blurry image: B.\n    1. K = I + weight * (I - B)\n    2. Mask = 1 if abs(I - B) > threshold, else: 0\n    3. Blur mask:\n    4. Out = Mask * K + (1 - Mask) * I\n    Args:\n        img (Numpy array): Input image, HWC, BGR; float32, [0, 1].\n        weight (float): Sharp weight. Default: 1.\n        radius (float): Kernel size of Gaussian blur. 
Default: 50.\n        threshold (int):\n    \"\"\"\n    if radius % 2 == 0:\n        radius += 1\n    blur = cv2.GaussianBlur(img, (radius, radius), 0)\n    residual = img - blur\n    mask = np.abs(residual) * 255 > threshold\n    mask = mask.astype('float32')\n    soft_mask = cv2.GaussianBlur(mask, (radius, radius), 0)\n\n    K = img + weight * residual\n    K = np.clip(K, 0, 1)\n    return soft_mask * K + (1 - soft_mask) * img\n\n\ndef add_blur(img, sf=4):\n    wd2 = 4.0 + sf\n    wd = 2.0 + 0.2 * sf\n    if random.random() < 0.5:\n        l1 = wd2 * random.random()\n        l2 = wd2 * random.random()\n        k = anisotropic_Gaussian(ksize=2 * random.randint(2, 11) + 3, theta=random.random() * np.pi, l1=l1, l2=l2)\n    else:\n        k = fspecial('gaussian', 2 * random.randint(2, 11) + 3, wd * random.random())\n    img = ndimage.filters.convolve(img, np.expand_dims(k, axis=2), mode='mirror')\n\n    return img\n\n\ndef add_resize(img, sf=4):\n    rnum = np.random.rand()\n    if rnum > 0.8:  # up\n        sf1 = random.uniform(1, 2)\n    elif rnum < 0.7:  # down\n        sf1 = random.uniform(0.5 / sf, 1)\n    else:\n        sf1 = 1.0\n    img = cv2.resize(img, (int(sf1 * img.shape[1]), int(sf1 * img.shape[0])), interpolation=random.choice([1, 2, 3]))\n    img = np.clip(img, 0.0, 1.0)\n\n    return img\n\n\n# def add_Gaussian_noise(img, noise_level1=2, noise_level2=25):\n#     noise_level = random.randint(noise_level1, noise_level2)\n#     rnum = np.random.rand()\n#     if rnum > 0.6:  # add color Gaussian noise\n#         img += np.random.normal(0, noise_level / 255.0, img.shape).astype(np.float32)\n#     elif rnum < 0.4:  # add grayscale Gaussian noise\n#         img += np.random.normal(0, noise_level / 255.0, (*img.shape[:2], 1)).astype(np.float32)\n#     else:  # add  noise\n#         L = noise_level2 / 255.\n#         D = np.diag(np.random.rand(3))\n#         U = orth(np.random.rand(3, 3))\n#         conv = np.dot(np.dot(np.transpose(U), D), U)\n#         img += np.random.multivariate_normal([0, 0, 0], np.abs(L ** 2 * conv), img.shape[:2]).astype(np.float32)\n#     img = np.clip(img, 0.0, 1.0)\n#     return img\n\ndef add_Gaussian_noise(img, noise_level1=2, noise_level2=25):\n    noise_level = random.randint(noise_level1, noise_level2)\n    rnum = np.random.rand()\n    if rnum > 0.6:  # add color Gaussian noise\n        img = img + np.random.normal(0, noise_level / 255.0, img.shape).astype(np.float32)\n    elif rnum < 0.4:  # add grayscale Gaussian noise\n        img = img + np.random.normal(0, noise_level / 255.0, (*img.shape[:2], 1)).astype(np.float32)\n    else:  # add  noise\n        L = noise_level2 / 255.\n        D = np.diag(np.random.rand(3))\n        U = orth(np.random.rand(3, 3))\n        conv = np.dot(np.dot(np.transpose(U), D), U)\n        img = img + np.random.multivariate_normal([0, 0, 0], np.abs(L ** 2 * conv), img.shape[:2]).astype(np.float32)\n    img = np.clip(img, 0.0, 1.0)\n    return img\n\n\ndef add_speckle_noise(img, noise_level1=2, noise_level2=25):\n    noise_level = random.randint(noise_level1, noise_level2)\n    img = np.clip(img, 0.0, 1.0)\n    rnum = random.random()\n    if rnum > 0.6:\n        img += img * np.random.normal(0, noise_level / 255.0, img.shape).astype(np.float32)\n    elif rnum < 0.4:\n        img += img * np.random.normal(0, noise_level / 255.0, (*img.shape[:2], 1)).astype(np.float32)\n    else:\n        L = noise_level2 / 255.\n        D = np.diag(np.random.rand(3))\n        U = orth(np.random.rand(3, 3))\n        conv = 
np.dot(np.dot(np.transpose(U), D), U)\n        img += img * np.random.multivariate_normal([0, 0, 0], np.abs(L ** 2 * conv), img.shape[:2]).astype(np.float32)\n    img = np.clip(img, 0.0, 1.0)\n    return img\n\n\ndef add_Poisson_noise(img):\n    img = np.clip((img * 255.0).round(), 0, 255) / 255.\n    vals = 10 ** (2 * random.random() + 2.0)  # [2, 4]\n    if random.random() < 0.5:\n        img = np.random.poisson(img * vals).astype(np.float32) / vals\n    else:\n        img_gray = np.dot(img[..., :3], [0.299, 0.587, 0.114])\n        img_gray = np.clip((img_gray * 255.0).round(), 0, 255) / 255.\n        noise_gray = np.random.poisson(img_gray * vals).astype(np.float32) / vals - img_gray\n        img += noise_gray[:, :, np.newaxis]\n    img = np.clip(img, 0.0, 1.0)\n    return img\n\n\ndef add_JPEG_noise(img):\n    quality_factor = random.randint(30, 95)\n    img = cv2.cvtColor(util.single2uint(img), cv2.COLOR_RGB2BGR)\n    result, encimg = cv2.imencode('.jpg', img, [int(cv2.IMWRITE_JPEG_QUALITY), quality_factor])\n    img = cv2.imdecode(encimg, 1)\n    img = cv2.cvtColor(util.uint2single(img), cv2.COLOR_BGR2RGB)\n    return img\n\n\ndef random_crop(lq, hq, sf=4, lq_patchsize=64):\n    h, w = lq.shape[:2]\n    rnd_h = random.randint(0, h - lq_patchsize)\n    rnd_w = random.randint(0, w - lq_patchsize)\n    lq = lq[rnd_h:rnd_h + lq_patchsize, rnd_w:rnd_w + lq_patchsize, :]\n\n    rnd_h_H, rnd_w_H = int(rnd_h * sf), int(rnd_w * sf)\n    hq = hq[rnd_h_H:rnd_h_H + lq_patchsize * sf, rnd_w_H:rnd_w_H + lq_patchsize * sf, :]\n    return lq, hq\n\n\ndef degradation_bsrgan(img, sf=4, lq_patchsize=72, isp_model=None):\n    \"\"\"\n    This is the degradation model of BSRGAN from the paper\n    \"Designing a Practical Degradation Model for Deep Blind Image Super-Resolution\"\n    ----------\n    img: HXWXC, [0, 1], its size should be large than (lq_patchsizexsf)x(lq_patchsizexsf)\n    sf: scale factor\n    isp_model: camera ISP model\n    Returns\n    -------\n    img: low-quality patch, size: lq_patchsizeXlq_patchsizeXC, range: [0, 1]\n    hq: corresponding high-quality patch, size: (lq_patchsizexsf)X(lq_patchsizexsf)XC, range: [0, 1]\n    \"\"\"\n    isp_prob, jpeg_prob, scale2_prob = 0.25, 0.9, 0.25\n    sf_ori = sf\n\n    h1, w1 = img.shape[:2]\n    img = img.copy()[:w1 - w1 % sf, :h1 - h1 % sf, ...]  
# mod crop\n    h, w = img.shape[:2]\n\n    if h < lq_patchsize * sf or w < lq_patchsize * sf:\n        raise ValueError(f'img size ({h1}X{w1}) is too small!')\n\n    hq = img.copy()\n\n    if sf == 4 and random.random() < scale2_prob:  # downsample1\n        if np.random.rand() < 0.5:\n            img = cv2.resize(img, (int(1 / 2 * img.shape[1]), int(1 / 2 * img.shape[0])),\n                             interpolation=random.choice([1, 2, 3]))\n        else:\n            img = util.imresize_np(img, 1 / 2, True)\n        img = np.clip(img, 0.0, 1.0)\n        sf = 2\n\n    shuffle_order = random.sample(range(7), 7)\n    idx1, idx2 = shuffle_order.index(2), shuffle_order.index(3)\n    if idx1 > idx2:  # keep downsample3 last\n        shuffle_order[idx1], shuffle_order[idx2] = shuffle_order[idx2], shuffle_order[idx1]\n\n    for i in shuffle_order:\n\n        if i == 0:\n            img = add_blur(img, sf=sf)\n\n        elif i == 1:\n            img = add_blur(img, sf=sf)\n\n        elif i == 2:\n            a, b = img.shape[1], img.shape[0]\n            # downsample2\n            if random.random() < 0.75:\n                sf1 = random.uniform(1, 2 * sf)\n                img = cv2.resize(img, (int(1 / sf1 * img.shape[1]), int(1 / sf1 * img.shape[0])),\n                                 interpolation=random.choice([1, 2, 3]))\n            else:\n                k = fspecial('gaussian', 25, random.uniform(0.1, 0.6 * sf))\n                k_shifted = shift_pixel(k, sf)\n                k_shifted = k_shifted / k_shifted.sum()  # blur with shifted kernel\n                img = ndimage.filters.convolve(img, np.expand_dims(k_shifted, axis=2), mode='mirror')\n                img = img[0::sf, 0::sf, ...]  # nearest downsampling\n            img = np.clip(img, 0.0, 1.0)\n\n        elif i == 3:\n            # downsample3\n            img = cv2.resize(img, (int(1 / sf * a), int(1 / sf * b)), interpolation=random.choice([1, 2, 3]))\n            img = np.clip(img, 0.0, 1.0)\n\n        elif i == 4:\n            # add Gaussian noise\n            img = add_Gaussian_noise(img, noise_level1=2, noise_level2=25)\n\n        elif i == 5:\n            # add JPEG noise\n            if random.random() < jpeg_prob:\n                img = add_JPEG_noise(img)\n\n        elif i == 6:\n            # add processed camera sensor noise\n            if random.random() < isp_prob and isp_model is not None:\n                with torch.no_grad():\n                    img, hq = isp_model.forward(img.copy(), hq)\n\n    # add final JPEG compression noise\n    img = add_JPEG_noise(img)\n\n    # random crop\n    img, hq = random_crop(img, hq, sf_ori, lq_patchsize)\n\n    return img, hq\n\n\n# todo no isp_model?\ndef degradation_bsrgan_variant(image, sf=4, isp_model=None):\n    \"\"\"\n    This is the degradation model of BSRGAN from the paper\n    \"Designing a Practical Degradation Model for Deep Blind Image Super-Resolution\"\n    ----------\n    sf: scale factor\n    isp_model: camera ISP model\n    Returns\n    -------\n    img: low-quality patch, size: lq_patchsizeXlq_patchsizeXC, range: [0, 1]\n    hq: corresponding high-quality patch, size: (lq_patchsizexsf)X(lq_patchsizexsf)XC, range: [0, 1]\n    \"\"\"\n    image = util.uint2single(image)\n    isp_prob, jpeg_prob, scale2_prob = 0.25, 0.9, 0.25\n    sf_ori = sf\n\n    h1, w1 = image.shape[:2]\n    image = image.copy()[:w1 - w1 % sf, :h1 - h1 % sf, ...]  
# mod crop\n    h, w = image.shape[:2]\n\n    hq = image.copy()\n\n    if sf == 4 and random.random() < scale2_prob:  # downsample1\n        if np.random.rand() < 0.5:\n            image = cv2.resize(image, (int(1 / 2 * image.shape[1]), int(1 / 2 * image.shape[0])),\n                               interpolation=random.choice([1, 2, 3]))\n        else:\n            image = util.imresize_np(image, 1 / 2, True)\n        image = np.clip(image, 0.0, 1.0)\n        sf = 2\n\n    shuffle_order = random.sample(range(7), 7)\n    idx1, idx2 = shuffle_order.index(2), shuffle_order.index(3)\n    if idx1 > idx2:  # keep downsample3 last\n        shuffle_order[idx1], shuffle_order[idx2] = shuffle_order[idx2], shuffle_order[idx1]\n\n    for i in shuffle_order:\n\n        if i == 0:\n            image = add_blur(image, sf=sf)\n\n        elif i == 1:\n            image = add_blur(image, sf=sf)\n\n        elif i == 2:\n            a, b = image.shape[1], image.shape[0]\n            # downsample2\n            if random.random() < 0.75:\n                sf1 = random.uniform(1, 2 * sf)\n                image = cv2.resize(image, (int(1 / sf1 * image.shape[1]), int(1 / sf1 * image.shape[0])),\n                                   interpolation=random.choice([1, 2, 3]))\n            else:\n                k = fspecial('gaussian', 25, random.uniform(0.1, 0.6 * sf))\n                k_shifted = shift_pixel(k, sf)\n                k_shifted = k_shifted / k_shifted.sum()  # blur with shifted kernel\n                image = ndimage.filters.convolve(image, np.expand_dims(k_shifted, axis=2), mode='mirror')\n                image = image[0::sf, 0::sf, ...]  # nearest downsampling\n            image = np.clip(image, 0.0, 1.0)\n\n        elif i == 3:\n            # downsample3\n            image = cv2.resize(image, (int(1 / sf * a), int(1 / sf * b)), interpolation=random.choice([1, 2, 3]))\n            image = np.clip(image, 0.0, 1.0)\n\n        elif i == 4:\n            # add Gaussian noise\n            image = add_Gaussian_noise(image, noise_level1=2, noise_level2=25)\n\n        elif i == 5:\n            # add JPEG noise\n            if random.random() < jpeg_prob:\n                image = add_JPEG_noise(image)\n\n        # elif i == 6:\n        #     # add processed camera sensor noise\n        #     if random.random() < isp_prob and isp_model is not None:\n        #         with torch.no_grad():\n        #             img, hq = isp_model.forward(img.copy(), hq)\n\n    # add final JPEG compression noise\n    image = add_JPEG_noise(image)\n    image = util.single2uint(image)\n    example = {\"image\":image}\n    return example\n\n\n# TODO incase there is a pickle error one needs to replace a += x with a = a + x in add_speckle_noise etc...\ndef degradation_bsrgan_plus(img, sf=4, shuffle_prob=0.5, use_sharp=True, lq_patchsize=64, isp_model=None):\n    \"\"\"\n    This is an extended degradation model by combining\n    the degradation models of BSRGAN and Real-ESRGAN\n    ----------\n    img: HXWXC, [0, 1], its size should be large than (lq_patchsizexsf)x(lq_patchsizexsf)\n    sf: scale factor\n    use_shuffle: the degradation shuffle\n    use_sharp: sharpening the img\n    Returns\n    -------\n    img: low-quality patch, size: lq_patchsizeXlq_patchsizeXC, range: [0, 1]\n    hq: corresponding high-quality patch, size: (lq_patchsizexsf)X(lq_patchsizexsf)XC, range: [0, 1]\n    \"\"\"\n\n    h1, w1 = img.shape[:2]\n    img = img.copy()[:w1 - w1 % sf, :h1 - h1 % sf, ...]  
# mod crop\n    h, w = img.shape[:2]\n\n    if h < lq_patchsize * sf or w < lq_patchsize * sf:\n        raise ValueError(f'img size ({h1}X{w1}) is too small!')\n\n    if use_sharp:\n        img = add_sharpening(img)\n    hq = img.copy()\n\n    if random.random() < shuffle_prob:\n        shuffle_order = random.sample(range(13), 13)\n    else:\n        shuffle_order = list(range(13))\n        # local shuffle for noise, JPEG is always the last one\n        shuffle_order[2:6] = random.sample(shuffle_order[2:6], len(range(2, 6)))\n        shuffle_order[9:13] = random.sample(shuffle_order[9:13], len(range(9, 13)))\n\n    poisson_prob, speckle_prob, isp_prob = 0.1, 0.1, 0.1\n\n    for i in shuffle_order:\n        if i == 0:\n            img = add_blur(img, sf=sf)\n        elif i == 1:\n            img = add_resize(img, sf=sf)\n        elif i == 2:\n            img = add_Gaussian_noise(img, noise_level1=2, noise_level2=25)\n        elif i == 3:\n            if random.random() < poisson_prob:\n                img = add_Poisson_noise(img)\n        elif i == 4:\n            if random.random() < speckle_prob:\n                img = add_speckle_noise(img)\n        elif i == 5:\n            if random.random() < isp_prob and isp_model is not None:\n                with torch.no_grad():\n                    img, hq = isp_model.forward(img.copy(), hq)\n        elif i == 6:\n            img = add_JPEG_noise(img)\n        elif i == 7:\n            img = add_blur(img, sf=sf)\n        elif i == 8:\n            img = add_resize(img, sf=sf)\n        elif i == 9:\n            img = add_Gaussian_noise(img, noise_level1=2, noise_level2=25)\n        elif i == 10:\n            if random.random() < poisson_prob:\n                img = add_Poisson_noise(img)\n        elif i == 11:\n            if random.random() < speckle_prob:\n                img = add_speckle_noise(img)\n        elif i == 12:\n            if random.random() < isp_prob and isp_model is not None:\n                with torch.no_grad():\n                    img, hq = isp_model.forward(img.copy(), hq)\n        else:\n            print('check the shuffle!')\n\n    # resize to desired size\n    img = cv2.resize(img, (int(1 / sf * hq.shape[1]), int(1 / sf * hq.shape[0])),\n                     interpolation=random.choice([1, 2, 3]))\n\n    # add final JPEG compression noise\n    img = add_JPEG_noise(img)\n\n    # random crop\n    img, hq = random_crop(img, hq, sf, lq_patchsize)\n\n    return img, hq\n\n\nif __name__ == '__main__':\n    print(\"hey\")\n    img = util.imread_uint('utils/test.png', 3)\n    img = img[:448, :448]\n    h = img.shape[0] // 4\n    print(\"resizing to\", h)\n    sf = 4\n    deg_fn = partial(degradation_bsrgan_variant, sf=sf)\n    for i in range(20):\n        print(i)\n        img_hq = img  # keep the original uint8 image as the HQ reference\n        img_lq = deg_fn(img)[\"image\"]  # degradation_bsrgan_variant returns a dict {\"image\": ...}\n        img_hq, img_lq = util.uint2single(img_hq), util.uint2single(img_lq)\n        print(img_lq)\n        img_lq_bicubic = albumentations.SmallestMaxSize(max_size=h, interpolation=cv2.INTER_CUBIC)(image=img_hq)[\"image\"]\n        print(img_lq.shape)\n        print(\"bicubic\", img_lq_bicubic.shape)\n        print(img_hq.shape)\n        lq_nearest = cv2.resize(util.single2uint(img_lq), (int(sf * img_lq.shape[1]), int(sf * img_lq.shape[0])),\n                                interpolation=0)\n        lq_bicubic_nearest = cv2.resize(util.single2uint(img_lq_bicubic),\n                                        (int(sf * img_lq.shape[1]), int(sf * img_lq.shape[0])),\n                                        interpolation=0)\n        img_concat = np.concatenate([lq_bicubic_nearest, lq_nearest, util.single2uint(img_hq)], axis=1)\n        util.imsave(img_concat, str(i) + '.png')\n\n\n"
  },
  {
    "path": "ToonCrafter/ldm/modules/image_degradation/bsrgan_light.py",
    "content": "# -*- coding: utf-8 -*-\nimport numpy as np\nimport cv2\nimport torch\n\nfrom functools import partial\nimport random\nfrom scipy import ndimage\nimport scipy\nimport scipy.stats as ss\nfrom scipy.interpolate import interp2d\nfrom scipy.linalg import orth\nimport albumentations\n\nimport ldm.modules.image_degradation.utils_image as util\n\n\"\"\"\n# --------------------------------------------\n# Super-Resolution\n# --------------------------------------------\n#\n# Kai Zhang (cskaizhang@gmail.com)\n# https://github.com/cszn\n# From 2019/03--2021/08\n# --------------------------------------------\n\"\"\"\n\ndef modcrop_np(img, sf):\n    '''\n    Args:\n        img: numpy image, WxH or WxHxC\n        sf: scale factor\n    Return:\n        cropped image\n    '''\n    w, h = img.shape[:2]\n    im = np.copy(img)\n    return im[:w - w % sf, :h - h % sf, ...]\n\n\n\"\"\"\n# --------------------------------------------\n# anisotropic Gaussian kernels\n# --------------------------------------------\n\"\"\"\n\n\ndef analytic_kernel(k):\n    \"\"\"Calculate the X4 kernel from the X2 kernel (for proof see appendix in paper)\"\"\"\n    k_size = k.shape[0]\n    # Calculate the big kernels size\n    big_k = np.zeros((3 * k_size - 2, 3 * k_size - 2))\n    # Loop over the small kernel to fill the big one\n    for r in range(k_size):\n        for c in range(k_size):\n            big_k[2 * r:2 * r + k_size, 2 * c:2 * c + k_size] += k[r, c] * k\n    # Crop the edges of the big kernel to ignore very small values and increase run time of SR\n    crop = k_size // 2\n    cropped_big_k = big_k[crop:-crop, crop:-crop]\n    # Normalize to 1\n    return cropped_big_k / cropped_big_k.sum()\n\n\ndef anisotropic_Gaussian(ksize=15, theta=np.pi, l1=6, l2=6):\n    \"\"\" generate an anisotropic Gaussian kernel\n    Args:\n        ksize : e.g., 15, kernel size\n        theta : [0,  pi], rotation angle range\n        l1    : [0.1,50], scaling of eigenvalues\n        l2    : [0.1,l1], scaling of eigenvalues\n        If l1 = l2, will get an isotropic Gaussian kernel.\n    Returns:\n        k     : kernel\n    \"\"\"\n\n    v = np.dot(np.array([[np.cos(theta), -np.sin(theta)], [np.sin(theta), np.cos(theta)]]), np.array([1., 0.]))\n    V = np.array([[v[0], v[1]], [v[1], -v[0]]])\n    D = np.array([[l1, 0], [0, l2]])\n    Sigma = np.dot(np.dot(V, D), np.linalg.inv(V))\n    k = gm_blur_kernel(mean=[0, 0], cov=Sigma, size=ksize)\n\n    return k\n\n\ndef gm_blur_kernel(mean, cov, size=15):\n    center = size / 2.0 + 0.5\n    k = np.zeros([size, size])\n    for y in range(size):\n        for x in range(size):\n            cy = y - center + 1\n            cx = x - center + 1\n            k[y, x] = ss.multivariate_normal.pdf([cx, cy], mean=mean, cov=cov)\n\n    k = k / np.sum(k)\n    return k\n\n\ndef shift_pixel(x, sf, upper_left=True):\n    \"\"\"shift pixel for super-resolution with different scale factors\n    Args:\n        x: WxHxC or WxH\n        sf: scale factor\n        upper_left: shift direction\n    \"\"\"\n    h, w = x.shape[:2]\n    shift = (sf - 1) * 0.5\n    xv, yv = np.arange(0, w, 1.0), np.arange(0, h, 1.0)\n    if upper_left:\n        x1 = xv + shift\n        y1 = yv + shift\n    else:\n        x1 = xv - shift\n        y1 = yv - shift\n\n    x1 = np.clip(x1, 0, w - 1)\n    y1 = np.clip(y1, 0, h - 1)\n\n    if x.ndim == 2:\n        x = interp2d(xv, yv, x)(x1, y1)\n    if x.ndim == 3:\n        for i in range(x.shape[-1]):\n            x[:, :, i] = interp2d(xv, yv, x[:, :, i])(x1, y1)\n\n    return 
x\n\n\ndef blur(x, k):\n    '''\n    x: image, NxcxHxW\n    k: kernel, Nx1xhxw\n    '''\n    n, c = x.shape[:2]\n    p1, p2 = (k.shape[-2] - 1) // 2, (k.shape[-1] - 1) // 2\n    x = torch.nn.functional.pad(x, pad=(p1, p2, p1, p2), mode='replicate')\n    k = k.repeat(1, c, 1, 1)\n    k = k.view(-1, 1, k.shape[2], k.shape[3])\n    x = x.view(1, -1, x.shape[2], x.shape[3])\n    x = torch.nn.functional.conv2d(x, k, bias=None, stride=1, padding=0, groups=n * c)\n    x = x.view(n, c, x.shape[2], x.shape[3])\n\n    return x\n\n\ndef gen_kernel(k_size=np.array([15, 15]), scale_factor=np.array([4, 4]), min_var=0.6, max_var=10., noise_level=0):\n    \"\"\"\n    # modified version of https://github.com/assafshocher/BlindSR_dataset_generator\n    # Kai Zhang\n    # min_var = 0.175 * sf  # variance of the gaussian kernel will be sampled between min_var and max_var\n    # max_var = 2.5 * sf\n    \"\"\"\n    # Set random eigen-vals (lambdas) and angle (theta) for COV matrix\n    lambda_1 = min_var + np.random.rand() * (max_var - min_var)\n    lambda_2 = min_var + np.random.rand() * (max_var - min_var)\n    theta = np.random.rand() * np.pi  # random theta\n    noise = -noise_level + np.random.rand(*k_size) * noise_level * 2\n\n    # Set COV matrix using Lambdas and Theta\n    LAMBDA = np.diag([lambda_1, lambda_2])\n    Q = np.array([[np.cos(theta), -np.sin(theta)],\n                  [np.sin(theta), np.cos(theta)]])\n    SIGMA = Q @ LAMBDA @ Q.T\n    INV_SIGMA = np.linalg.inv(SIGMA)[None, None, :, :]\n\n    # Set expectation position (shifting kernel for aligned image)\n    MU = k_size // 2 - 0.5 * (scale_factor - 1)  # - 0.5 * (scale_factor - k_size % 2)\n    MU = MU[None, None, :, None]\n\n    # Create meshgrid for Gaussian\n    [X, Y] = np.meshgrid(range(k_size[0]), range(k_size[1]))\n    Z = np.stack([X, Y], 2)[:, :, :, None]\n\n    # Calculate Gaussian for every pixel of the kernel\n    ZZ = Z - MU\n    ZZ_t = ZZ.transpose(0, 1, 3, 2)\n    raw_kernel = np.exp(-0.5 * np.squeeze(ZZ_t @ INV_SIGMA @ ZZ)) * (1 + noise)\n\n    # shift the kernel so it will be centered\n    # raw_kernel_centered = kernel_shift(raw_kernel, scale_factor)\n\n    # Normalize the kernel and return\n    # kernel = raw_kernel_centered / np.sum(raw_kernel_centered)\n    kernel = raw_kernel / np.sum(raw_kernel)\n    return kernel\n\n\ndef fspecial_gaussian(hsize, sigma):\n    hsize = [hsize, hsize]\n    siz = [(hsize[0] - 1.0) / 2.0, (hsize[1] - 1.0) / 2.0]\n    std = sigma\n    [x, y] = np.meshgrid(np.arange(-siz[1], siz[1] + 1), np.arange(-siz[0], siz[0] + 1))\n    arg = -(x * x + y * y) / (2 * std * std)\n    h = np.exp(arg)\n    h[h < np.finfo(float).eps * h.max()] = 0  # use numpy's finfo; scipy's top-level finfo alias is deprecated\n    sumh = h.sum()\n    if sumh != 0:\n        h = h / sumh\n    return h\n\n\ndef fspecial_laplacian(alpha):\n    alpha = max([0, min([alpha, 1])])\n    h1 = alpha / (alpha + 1)\n    h2 = (1 - alpha) / (alpha + 1)\n    h = [[h1, h2, h1], [h2, -4 / (alpha + 1), h2], [h1, h2, h1]]\n    h = np.array(h)\n    return h\n\n\ndef fspecial(filter_type, *args, **kwargs):\n    '''\n    python code from:\n    https://github.com/ronaldosena/imagens-medicas-2/blob/40171a6c259edec7827a6693a93955de2bd39e76/Aulas/aula_2_-_uniform_filter/matlab_fspecial.py\n    '''\n    if filter_type == 'gaussian':\n        return fspecial_gaussian(*args, **kwargs)\n    if filter_type == 'laplacian':\n        return fspecial_laplacian(*args, **kwargs)\n\n\n\"\"\"\n# --------------------------------------------\n# degradation models\n# 
--------------------------------------------\n\"\"\"\n\n\ndef bicubic_degradation(x, sf=3):\n    '''\n    Args:\n        x: HxWxC image, [0, 1]\n        sf: down-scale factor\n    Return:\n        bicubicly downsampled LR image\n    '''\n    x = util.imresize_np(x, scale=1 / sf)\n    return x\n\n\ndef srmd_degradation(x, k, sf=3):\n    ''' blur + bicubic downsampling\n    Args:\n        x: HxWxC image, [0, 1]\n        k: hxw, double\n        sf: down-scale factor\n    Return:\n        downsampled LR image\n    Reference:\n        @inproceedings{zhang2018learning,\n          title={Learning a single convolutional super-resolution network for multiple degradations},\n          author={Zhang, Kai and Zuo, Wangmeng and Zhang, Lei},\n          booktitle={IEEE Conference on Computer Vision and Pattern Recognition},\n          pages={3262--3271},\n          year={2018}\n        }\n    '''\n    x = ndimage.convolve(x, np.expand_dims(k, axis=2), mode='wrap')  # 'nearest' | 'mirror'\n    x = bicubic_degradation(x, sf=sf)\n    return x\n\n\ndef dpsr_degradation(x, k, sf=3):\n    ''' bicubic downsampling + blur\n    Args:\n        x: HxWxC image, [0, 1]\n        k: hxw, double\n        sf: down-scale factor\n    Return:\n        downsampled LR image\n    Reference:\n        @inproceedings{zhang2019deep,\n          title={Deep Plug-and-Play Super-Resolution for Arbitrary Blur Kernels},\n          author={Zhang, Kai and Zuo, Wangmeng and Zhang, Lei},\n          booktitle={IEEE Conference on Computer Vision and Pattern Recognition},\n          pages={1671--1681},\n          year={2019}\n        }\n    '''\n    x = bicubic_degradation(x, sf=sf)\n    x = ndimage.convolve(x, np.expand_dims(k, axis=2), mode='wrap')\n    return x\n\n\ndef classical_degradation(x, k, sf=3):\n    ''' blur + downsampling\n    Args:\n        x: HxWxC image, [0, 1]/[0, 255]\n        k: hxw, double\n        sf: down-scale factor\n    Return:\n        downsampled LR image\n    '''\n    x = ndimage.convolve(x, np.expand_dims(k, axis=2), mode='wrap')\n    # x = filters.correlate(x, np.expand_dims(np.flip(k), axis=2))\n    st = 0\n    return x[st::sf, st::sf, ...]\n\n\ndef add_sharpening(img, weight=0.5, radius=50, threshold=10):\n    \"\"\"USM sharpening. borrowed from real-ESRGAN\n    Input image: I; Blurry image: B.\n    1. K = I + weight * (I - B)\n    2. Mask = 1 if abs(I - B) > threshold, else: 0\n    3. Blur mask:\n    4. Out = Mask * K + (1 - Mask) * I\n    Args:\n        img (Numpy array): Input image, HWC, BGR; float32, [0, 1].\n        weight (float): Sharp weight. Default: 1.\n        radius (float): Kernel size of Gaussian blur. 
Default: 50.\n        threshold (int):\n    \"\"\"\n    if radius % 2 == 0:\n        radius += 1\n    blur = cv2.GaussianBlur(img, (radius, radius), 0)\n    residual = img - blur\n    mask = np.abs(residual) * 255 > threshold\n    mask = mask.astype('float32')\n    soft_mask = cv2.GaussianBlur(mask, (radius, radius), 0)\n\n    K = img + weight * residual\n    K = np.clip(K, 0, 1)\n    return soft_mask * K + (1 - soft_mask) * img\n\n\ndef add_blur(img, sf=4):\n    wd2 = 4.0 + sf\n    wd = 2.0 + 0.2 * sf\n\n    wd2 = wd2/4\n    wd = wd/4\n\n    if random.random() < 0.5:\n        l1 = wd2 * random.random()\n        l2 = wd2 * random.random()\n        k = anisotropic_Gaussian(ksize=random.randint(2, 11) + 3, theta=random.random() * np.pi, l1=l1, l2=l2)\n    else:\n        k = fspecial('gaussian', random.randint(2, 4) + 3, wd * random.random())\n    img = ndimage.convolve(img, np.expand_dims(k, axis=2), mode='mirror')\n\n    return img\n\n\ndef add_resize(img, sf=4):\n    rnum = np.random.rand()\n    if rnum > 0.8:  # up\n        sf1 = random.uniform(1, 2)\n    elif rnum < 0.7:  # down\n        sf1 = random.uniform(0.5 / sf, 1)\n    else:\n        sf1 = 1.0\n    img = cv2.resize(img, (int(sf1 * img.shape[1]), int(sf1 * img.shape[0])), interpolation=random.choice([1, 2, 3]))\n    img = np.clip(img, 0.0, 1.0)\n\n    return img\n\n\n# def add_Gaussian_noise(img, noise_level1=2, noise_level2=25):\n#     noise_level = random.randint(noise_level1, noise_level2)\n#     rnum = np.random.rand()\n#     if rnum > 0.6:  # add color Gaussian noise\n#         img += np.random.normal(0, noise_level / 255.0, img.shape).astype(np.float32)\n#     elif rnum < 0.4:  # add grayscale Gaussian noise\n#         img += np.random.normal(0, noise_level / 255.0, (*img.shape[:2], 1)).astype(np.float32)\n#     else:  # add  noise\n#         L = noise_level2 / 255.\n#         D = np.diag(np.random.rand(3))\n#         U = orth(np.random.rand(3, 3))\n#         conv = np.dot(np.dot(np.transpose(U), D), U)\n#         img += np.random.multivariate_normal([0, 0, 0], np.abs(L ** 2 * conv), img.shape[:2]).astype(np.float32)\n#     img = np.clip(img, 0.0, 1.0)\n#     return img\n\ndef add_Gaussian_noise(img, noise_level1=2, noise_level2=25):\n    noise_level = random.randint(noise_level1, noise_level2)\n    rnum = np.random.rand()\n    if rnum > 0.6:  # add color Gaussian noise\n        img = img + np.random.normal(0, noise_level / 255.0, img.shape).astype(np.float32)\n    elif rnum < 0.4:  # add grayscale Gaussian noise\n        img = img + np.random.normal(0, noise_level / 255.0, (*img.shape[:2], 1)).astype(np.float32)\n    else:  # add  noise\n        L = noise_level2 / 255.\n        D = np.diag(np.random.rand(3))\n        U = orth(np.random.rand(3, 3))\n        conv = np.dot(np.dot(np.transpose(U), D), U)\n        img = img + np.random.multivariate_normal([0, 0, 0], np.abs(L ** 2 * conv), img.shape[:2]).astype(np.float32)\n    img = np.clip(img, 0.0, 1.0)\n    return img\n\n\ndef add_speckle_noise(img, noise_level1=2, noise_level2=25):\n    noise_level = random.randint(noise_level1, noise_level2)\n    img = np.clip(img, 0.0, 1.0)\n    rnum = random.random()\n    if rnum > 0.6:\n        img += img * np.random.normal(0, noise_level / 255.0, img.shape).astype(np.float32)\n    elif rnum < 0.4:\n        img += img * np.random.normal(0, noise_level / 255.0, (*img.shape[:2], 1)).astype(np.float32)\n    else:\n        L = noise_level2 / 255.\n        D = np.diag(np.random.rand(3))\n        U = orth(np.random.rand(3, 3))\n        conv = 
np.dot(np.dot(np.transpose(U), D), U)\n        img += img * np.random.multivariate_normal([0, 0, 0], np.abs(L ** 2 * conv), img.shape[:2]).astype(np.float32)\n    img = np.clip(img, 0.0, 1.0)\n    return img\n\n\ndef add_Poisson_noise(img):\n    img = np.clip((img * 255.0).round(), 0, 255) / 255.\n    vals = 10 ** (2 * random.random() + 2.0)  # [2, 4]\n    if random.random() < 0.5:\n        img = np.random.poisson(img * vals).astype(np.float32) / vals\n    else:\n        img_gray = np.dot(img[..., :3], [0.299, 0.587, 0.114])\n        img_gray = np.clip((img_gray * 255.0).round(), 0, 255) / 255.\n        noise_gray = np.random.poisson(img_gray * vals).astype(np.float32) / vals - img_gray\n        img += noise_gray[:, :, np.newaxis]\n    img = np.clip(img, 0.0, 1.0)\n    return img\n\n\ndef add_JPEG_noise(img):\n    quality_factor = random.randint(80, 95)\n    img = cv2.cvtColor(util.single2uint(img), cv2.COLOR_RGB2BGR)\n    result, encimg = cv2.imencode('.jpg', img, [int(cv2.IMWRITE_JPEG_QUALITY), quality_factor])\n    img = cv2.imdecode(encimg, 1)\n    img = cv2.cvtColor(util.uint2single(img), cv2.COLOR_BGR2RGB)\n    return img\n\n\ndef random_crop(lq, hq, sf=4, lq_patchsize=64):\n    h, w = lq.shape[:2]\n    rnd_h = random.randint(0, h - lq_patchsize)\n    rnd_w = random.randint(0, w - lq_patchsize)\n    lq = lq[rnd_h:rnd_h + lq_patchsize, rnd_w:rnd_w + lq_patchsize, :]\n\n    rnd_h_H, rnd_w_H = int(rnd_h * sf), int(rnd_w * sf)\n    hq = hq[rnd_h_H:rnd_h_H + lq_patchsize * sf, rnd_w_H:rnd_w_H + lq_patchsize * sf, :]\n    return lq, hq\n\n\ndef degradation_bsrgan(img, sf=4, lq_patchsize=72, isp_model=None):\n    \"\"\"\n    This is the degradation model of BSRGAN from the paper\n    \"Designing a Practical Degradation Model for Deep Blind Image Super-Resolution\"\n    ----------\n    img: HXWXC, [0, 1], its size should be large than (lq_patchsizexsf)x(lq_patchsizexsf)\n    sf: scale factor\n    isp_model: camera ISP model\n    Returns\n    -------\n    img: low-quality patch, size: lq_patchsizeXlq_patchsizeXC, range: [0, 1]\n    hq: corresponding high-quality patch, size: (lq_patchsizexsf)X(lq_patchsizexsf)XC, range: [0, 1]\n    \"\"\"\n    isp_prob, jpeg_prob, scale2_prob = 0.25, 0.9, 0.25\n    sf_ori = sf\n\n    h1, w1 = img.shape[:2]\n    img = img.copy()[:w1 - w1 % sf, :h1 - h1 % sf, ...]  
# mod crop\n    h, w = img.shape[:2]\n\n    if h < lq_patchsize * sf or w < lq_patchsize * sf:\n        raise ValueError(f'img size ({h1}X{w1}) is too small!')\n\n    hq = img.copy()\n\n    if sf == 4 and random.random() < scale2_prob:  # downsample1\n        if np.random.rand() < 0.5:\n            img = cv2.resize(img, (int(1 / 2 * img.shape[1]), int(1 / 2 * img.shape[0])),\n                             interpolation=random.choice([1, 2, 3]))\n        else:\n            img = util.imresize_np(img, 1 / 2, True)\n        img = np.clip(img, 0.0, 1.0)\n        sf = 2\n\n    shuffle_order = random.sample(range(7), 7)\n    idx1, idx2 = shuffle_order.index(2), shuffle_order.index(3)\n    if idx1 > idx2:  # keep downsample3 last\n        shuffle_order[idx1], shuffle_order[idx2] = shuffle_order[idx2], shuffle_order[idx1]\n\n    for i in shuffle_order:\n\n        if i == 0:\n            img = add_blur(img, sf=sf)\n\n        elif i == 1:\n            img = add_blur(img, sf=sf)\n\n        elif i == 2:\n            a, b = img.shape[1], img.shape[0]\n            # downsample2\n            if random.random() < 0.75:\n                sf1 = random.uniform(1, 2 * sf)\n                img = cv2.resize(img, (int(1 / sf1 * img.shape[1]), int(1 / sf1 * img.shape[0])),\n                                 interpolation=random.choice([1, 2, 3]))\n            else:\n                k = fspecial('gaussian', 25, random.uniform(0.1, 0.6 * sf))\n                k_shifted = shift_pixel(k, sf)\n                k_shifted = k_shifted / k_shifted.sum()  # blur with shifted kernel\n                img = ndimage.convolve(img, np.expand_dims(k_shifted, axis=2), mode='mirror')\n                img = img[0::sf, 0::sf, ...]  # nearest downsampling\n            img = np.clip(img, 0.0, 1.0)\n\n        elif i == 3:\n            # downsample3\n            img = cv2.resize(img, (int(1 / sf * a), int(1 / sf * b)), interpolation=random.choice([1, 2, 3]))\n            img = np.clip(img, 0.0, 1.0)\n\n        elif i == 4:\n            # add Gaussian noise\n            img = add_Gaussian_noise(img, noise_level1=2, noise_level2=8)\n\n        elif i == 5:\n            # add JPEG noise\n            if random.random() < jpeg_prob:\n                img = add_JPEG_noise(img)\n\n        elif i == 6:\n            # add processed camera sensor noise\n            if random.random() < isp_prob and isp_model is not None:\n                with torch.no_grad():\n                    img, hq = isp_model.forward(img.copy(), hq)\n\n    # add final JPEG compression noise\n    img = add_JPEG_noise(img)\n\n    # random crop\n    img, hq = random_crop(img, hq, sf_ori, lq_patchsize)\n\n    return img, hq\n\n\n# todo no isp_model?\ndef degradation_bsrgan_variant(image, sf=4, isp_model=None, up=False):\n    \"\"\"\n    This is the degradation model of BSRGAN from the paper\n    \"Designing a Practical Degradation Model for Deep Blind Image Super-Resolution\"\n    ----------\n    sf: scale factor\n    isp_model: camera ISP model\n    Returns\n    -------\n    img: low-quality patch, size: lq_patchsizeXlq_patchsizeXC, range: [0, 1]\n    hq: corresponding high-quality patch, size: (lq_patchsizexsf)X(lq_patchsizexsf)XC, range: [0, 1]\n    \"\"\"\n    image = util.uint2single(image)\n    isp_prob, jpeg_prob, scale2_prob = 0.25, 0.9, 0.25\n    sf_ori = sf\n\n    h1, w1 = image.shape[:2]\n    image = image.copy()[:w1 - w1 % sf, :h1 - h1 % sf, ...]  
# mod crop\n    h, w = image.shape[:2]\n\n    hq = image.copy()\n\n    if sf == 4 and random.random() < scale2_prob:  # downsample1\n        if np.random.rand() < 0.5:\n            image = cv2.resize(image, (int(1 / 2 * image.shape[1]), int(1 / 2 * image.shape[0])),\n                               interpolation=random.choice([1, 2, 3]))\n        else:\n            image = util.imresize_np(image, 1 / 2, True)\n        image = np.clip(image, 0.0, 1.0)\n        sf = 2\n\n    shuffle_order = random.sample(range(7), 7)\n    idx1, idx2 = shuffle_order.index(2), shuffle_order.index(3)\n    if idx1 > idx2:  # keep downsample3 last\n        shuffle_order[idx1], shuffle_order[idx2] = shuffle_order[idx2], shuffle_order[idx1]\n\n    for i in shuffle_order:\n\n        if i == 0:\n            image = add_blur(image, sf=sf)\n\n        # elif i == 1:\n        #     image = add_blur(image, sf=sf)\n\n        if i == 0:\n            pass\n\n        elif i == 2:\n            a, b = image.shape[1], image.shape[0]\n            # downsample2\n            if random.random() < 0.8:\n                sf1 = random.uniform(1, 2 * sf)\n                image = cv2.resize(image, (int(1 / sf1 * image.shape[1]), int(1 / sf1 * image.shape[0])),\n                                   interpolation=random.choice([1, 2, 3]))\n            else:\n                k = fspecial('gaussian', 25, random.uniform(0.1, 0.6 * sf))\n                k_shifted = shift_pixel(k, sf)\n                k_shifted = k_shifted / k_shifted.sum()  # blur with shifted kernel\n                image = ndimage.convolve(image, np.expand_dims(k_shifted, axis=2), mode='mirror')\n                image = image[0::sf, 0::sf, ...]  # nearest downsampling\n\n            image = np.clip(image, 0.0, 1.0)\n\n        elif i == 3:\n            # downsample3\n            image = cv2.resize(image, (int(1 / sf * a), int(1 / sf * b)), interpolation=random.choice([1, 2, 3]))\n            image = np.clip(image, 0.0, 1.0)\n\n        elif i == 4:\n            # add Gaussian noise\n            image = add_Gaussian_noise(image, noise_level1=1, noise_level2=2)\n\n        elif i == 5:\n            # add JPEG noise\n            if random.random() < jpeg_prob:\n                image = add_JPEG_noise(image)\n        #\n        # elif i == 6:\n        #     # add processed camera sensor noise\n        #     if random.random() < isp_prob and isp_model is not None:\n        #         with torch.no_grad():\n        #             img, hq = isp_model.forward(img.copy(), hq)\n\n    # add final JPEG compression noise\n    image = add_JPEG_noise(image)\n    image = util.single2uint(image)\n    if up:\n        image = cv2.resize(image, (w1, h1), interpolation=cv2.INTER_CUBIC)  # todo: random, as above? 
want to condition on it then\n    example = {\"image\": image}\n    return example\n\n\n\n\nif __name__ == '__main__':\n    print(\"hey\")\n    img = util.imread_uint('utils/test.png', 3)\n    img = img[:448, :448]\n    h = img.shape[0] // 4\n    print(\"resizing to\", h)\n    sf = 4\n    deg_fn = partial(degradation_bsrgan_variant, sf=sf)\n    for i in range(20):\n        print(i)\n        img_hq = img\n        img_lq = deg_fn(img)[\"image\"]\n        img_hq, img_lq = util.uint2single(img_hq), util.uint2single(img_lq)\n        print(img_lq)\n        img_lq_bicubic = albumentations.SmallestMaxSize(max_size=h, interpolation=cv2.INTER_CUBIC)(image=img_hq)[\"image\"]\n        print(img_lq.shape)\n        print(\"bicubic\", img_lq_bicubic.shape)\n        print(img_hq.shape)\n        lq_nearest = cv2.resize(util.single2uint(img_lq), (int(sf * img_lq.shape[1]), int(sf * img_lq.shape[0])),\n                                interpolation=0)\n        lq_bicubic_nearest = cv2.resize(util.single2uint(img_lq_bicubic),\n                                        (int(sf * img_lq.shape[1]), int(sf * img_lq.shape[0])),\n                                        interpolation=0)\n        img_concat = np.concatenate([lq_bicubic_nearest, lq_nearest, util.single2uint(img_hq)], axis=1)\n        util.imsave(img_concat, str(i) + '.png')\n"
  },
  {
    "path": "ToonCrafter/ldm/modules/image_degradation/utils_image.py",
    "content": "import os\nimport math\nimport random\nimport numpy as np\nimport torch\nimport cv2\nfrom torchvision.utils import make_grid\nfrom datetime import datetime\n#import matplotlib.pyplot as plt   # TODO: check with Dominik, also bsrgan.py vs bsrgan_light.py\n\n\nos.environ[\"KMP_DUPLICATE_LIB_OK\"]=\"TRUE\"\n\n\n'''\n# --------------------------------------------\n# Kai Zhang (github: https://github.com/cszn)\n# 03/Mar/2019\n# --------------------------------------------\n# https://github.com/twhui/SRGAN-pyTorch\n# https://github.com/xinntao/BasicSR\n# --------------------------------------------\n'''\n\n\nIMG_EXTENSIONS = ['.jpg', '.JPG', '.jpeg', '.JPEG', '.png', '.PNG', '.ppm', '.PPM', '.bmp', '.BMP', '.tif']\n\n\ndef is_image_file(filename):\n    return any(filename.endswith(extension) for extension in IMG_EXTENSIONS)\n\n\ndef get_timestamp():\n    return datetime.now().strftime('%y%m%d-%H%M%S')\n\n\ndef imshow(x, title=None, cbar=False, figsize=None):\n    plt.figure(figsize=figsize)\n    plt.imshow(np.squeeze(x), interpolation='nearest', cmap='gray')\n    if title:\n        plt.title(title)\n    if cbar:\n        plt.colorbar()\n    plt.show()\n\n\ndef surf(Z, cmap='rainbow', figsize=None):\n    plt.figure(figsize=figsize)\n    ax3 = plt.axes(projection='3d')\n\n    w, h = Z.shape[:2]\n    xx = np.arange(0,w,1)\n    yy = np.arange(0,h,1)\n    X, Y = np.meshgrid(xx, yy)\n    ax3.plot_surface(X,Y,Z,cmap=cmap)\n    #ax3.contour(X,Y,Z, zdim='z',offset=-2，cmap=cmap)\n    plt.show()\n\n\n'''\n# --------------------------------------------\n# get image pathes\n# --------------------------------------------\n'''\n\n\ndef get_image_paths(dataroot):\n    paths = None  # return None if dataroot is None\n    if dataroot is not None:\n        paths = sorted(_get_paths_from_images(dataroot))\n    return paths\n\n\ndef _get_paths_from_images(path):\n    assert os.path.isdir(path), '{:s} is not a valid directory'.format(path)\n    images = []\n    for dirpath, _, fnames in sorted(os.walk(path)):\n        for fname in sorted(fnames):\n            if is_image_file(fname):\n                img_path = os.path.join(dirpath, fname)\n                images.append(img_path)\n    assert images, '{:s} has no valid image file'.format(path)\n    return images\n\n\n'''\n# --------------------------------------------\n# split large images into small images \n# --------------------------------------------\n'''\n\n\ndef patches_from_image(img, p_size=512, p_overlap=64, p_max=800):\n    w, h = img.shape[:2]\n    patches = []\n    if w > p_max and h > p_max:\n        w1 = list(np.arange(0, w-p_size, p_size-p_overlap, dtype=np.int))\n        h1 = list(np.arange(0, h-p_size, p_size-p_overlap, dtype=np.int))\n        w1.append(w-p_size)\n        h1.append(h-p_size)\n#        print(w1)\n#        print(h1)\n        for i in w1:\n            for j in h1:\n                patches.append(img[i:i+p_size, j:j+p_size,:])\n    else:\n        patches.append(img)\n\n    return patches\n\n\ndef imssave(imgs, img_path):\n    \"\"\"\n    imgs: list, N images of size WxHxC\n    \"\"\"\n    img_name, ext = os.path.splitext(os.path.basename(img_path))\n\n    for i, img in enumerate(imgs):\n        if img.ndim == 3:\n            img = img[:, :, [2, 1, 0]]\n        new_path = os.path.join(os.path.dirname(img_path), img_name+str('_s{:04d}'.format(i))+'.png')\n        cv2.imwrite(new_path, img)\n\n\ndef split_imageset(original_dataroot, taget_dataroot, n_channels=3, p_size=800, p_overlap=96, p_max=1000):\n    \"\"\"\n    split 
the large images from original_dataroot into small overlapped images with size (p_size)x(p_size),\n    and save them into taget_dataroot; only the images with larger size than (p_max)x(p_max)\n    will be splitted.\n    Args:\n        original_dataroot:\n        taget_dataroot:\n        p_size: size of small images\n        p_overlap: patch size in training is a good choice\n        p_max: images with smaller size than (p_max)x(p_max) keep unchanged.\n    \"\"\"\n    paths = get_image_paths(original_dataroot)\n    for img_path in paths:\n        # img_name, ext = os.path.splitext(os.path.basename(img_path))\n        img = imread_uint(img_path, n_channels=n_channels)\n        patches = patches_from_image(img, p_size, p_overlap, p_max)\n        imssave(patches, os.path.join(taget_dataroot,os.path.basename(img_path)))\n        #if original_dataroot == taget_dataroot:\n        #del img_path\n\n'''\n# --------------------------------------------\n# makedir\n# --------------------------------------------\n'''\n\n\ndef mkdir(path):\n    if not os.path.exists(path):\n        os.makedirs(path)\n\n\ndef mkdirs(paths):\n    if isinstance(paths, str):\n        mkdir(paths)\n    else:\n        for path in paths:\n            mkdir(path)\n\n\ndef mkdir_and_rename(path):\n    if os.path.exists(path):\n        new_name = path + '_archived_' + get_timestamp()\n        print('Path already exists. Rename it to [{:s}]'.format(new_name))\n        os.rename(path, new_name)\n    os.makedirs(path)\n\n\n'''\n# --------------------------------------------\n# read image from path\n# opencv is fast, but read BGR numpy image\n# --------------------------------------------\n'''\n\n\n# --------------------------------------------\n# get uint8 image of size HxWxn_channles (RGB)\n# --------------------------------------------\ndef imread_uint(path, n_channels=3):\n    #  input: path\n    # output: HxWx3(RGB or GGG), or HxWx1 (G)\n    if n_channels == 1:\n        img = cv2.imread(path, 0)  # cv2.IMREAD_GRAYSCALE\n        img = np.expand_dims(img, axis=2)  # HxWx1\n    elif n_channels == 3:\n        img = cv2.imread(path, cv2.IMREAD_UNCHANGED)  # BGR or G\n        if img.ndim == 2:\n            img = cv2.cvtColor(img, cv2.COLOR_GRAY2RGB)  # GGG\n        else:\n            img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)  # RGB\n    return img\n\n\n# --------------------------------------------\n# matlab's imwrite\n# --------------------------------------------\ndef imsave(img, img_path):\n    img = np.squeeze(img)\n    if img.ndim == 3:\n        img = img[:, :, [2, 1, 0]]\n    cv2.imwrite(img_path, img)\n\ndef imwrite(img, img_path):\n    img = np.squeeze(img)\n    if img.ndim == 3:\n        img = img[:, :, [2, 1, 0]]\n    cv2.imwrite(img_path, img)\n\n\n\n# --------------------------------------------\n# get single image of size HxWxn_channles (BGR)\n# --------------------------------------------\ndef read_img(path):\n    # read image by cv2\n    # return: Numpy float32, HWC, BGR, [0,1]\n    img = cv2.imread(path, cv2.IMREAD_UNCHANGED)  # cv2.IMREAD_GRAYSCALE\n    img = img.astype(np.float32) / 255.\n    if img.ndim == 2:\n        img = np.expand_dims(img, axis=2)\n    # some images have 4 channels\n    if img.shape[2] > 3:\n        img = img[:, :, :3]\n    return img\n\n\n'''\n# --------------------------------------------\n# image format conversion\n# --------------------------------------------\n# numpy(single) <--->  numpy(unit)\n# numpy(single) <--->  tensor\n# numpy(unit)   <--->  tensor\n# 
--------------------------------------------\n'''\n\n\n# --------------------------------------------\n# numpy(single) [0, 1] <--->  numpy(unit)\n# --------------------------------------------\n\n\ndef uint2single(img):\n\n    return np.float32(img/255.)\n\n\ndef single2uint(img):\n\n    return np.uint8((img.clip(0, 1)*255.).round())\n\n\ndef uint162single(img):\n\n    return np.float32(img/65535.)\n\n\ndef single2uint16(img):\n\n    return np.uint16((img.clip(0, 1)*65535.).round())\n\n\n# --------------------------------------------\n# numpy(unit) (HxWxC or HxW) <--->  tensor\n# --------------------------------------------\n\n\n# convert uint to 4-dimensional torch tensor\ndef uint2tensor4(img):\n    if img.ndim == 2:\n        img = np.expand_dims(img, axis=2)\n    return torch.from_numpy(np.ascontiguousarray(img)).permute(2, 0, 1).float().div(255.).unsqueeze(0)\n\n\n# convert uint to 3-dimensional torch tensor\ndef uint2tensor3(img):\n    if img.ndim == 2:\n        img = np.expand_dims(img, axis=2)\n    return torch.from_numpy(np.ascontiguousarray(img)).permute(2, 0, 1).float().div(255.)\n\n\n# convert 2/3/4-dimensional torch tensor to uint\ndef tensor2uint(img):\n    img = img.data.squeeze().float().clamp_(0, 1).cpu().numpy()\n    if img.ndim == 3:\n        img = np.transpose(img, (1, 2, 0))\n    return np.uint8((img*255.0).round())\n\n\n# --------------------------------------------\n# numpy(single) (HxWxC) <--->  tensor\n# --------------------------------------------\n\n\n# convert single (HxWxC) to 3-dimensional torch tensor\ndef single2tensor3(img):\n    return torch.from_numpy(np.ascontiguousarray(img)).permute(2, 0, 1).float()\n\n\n# convert single (HxWxC) to 4-dimensional torch tensor\ndef single2tensor4(img):\n    return torch.from_numpy(np.ascontiguousarray(img)).permute(2, 0, 1).float().unsqueeze(0)\n\n\n# convert torch tensor to single\ndef tensor2single(img):\n    img = img.data.squeeze().float().cpu().numpy()\n    if img.ndim == 3:\n        img = np.transpose(img, (1, 2, 0))\n\n    return img\n\n# convert torch tensor to single\ndef tensor2single3(img):\n    img = img.data.squeeze().float().cpu().numpy()\n    if img.ndim == 3:\n        img = np.transpose(img, (1, 2, 0))\n    elif img.ndim == 2:\n        img = np.expand_dims(img, axis=2)\n    return img\n\n\ndef single2tensor5(img):\n    return torch.from_numpy(np.ascontiguousarray(img)).permute(2, 0, 1, 3).float().unsqueeze(0)\n\n\ndef single32tensor5(img):\n    return torch.from_numpy(np.ascontiguousarray(img)).float().unsqueeze(0).unsqueeze(0)\n\n\ndef single42tensor4(img):\n    return torch.from_numpy(np.ascontiguousarray(img)).permute(2, 0, 1, 3).float()\n\n\n# from skimage.io import imread, imsave\ndef tensor2img(tensor, out_type=np.uint8, min_max=(0, 1)):\n    '''\n    Converts a torch Tensor into an image Numpy array of BGR channel order\n    Input: 4D(B,(3/1),H,W), 3D(C,H,W), or 2D(H,W), any range, RGB channel order\n    Output: 3D(H,W,C) or 2D(H,W), [0,255], np.uint8 (default)\n    '''\n    tensor = tensor.squeeze().float().cpu().clamp_(*min_max)  # squeeze first, then clamp\n    tensor = (tensor - min_max[0]) / (min_max[1] - min_max[0])  # to range [0,1]\n    n_dim = tensor.dim()\n    if n_dim == 4:\n        n_img = len(tensor)\n        img_np = make_grid(tensor, nrow=int(math.sqrt(n_img)), normalize=False).numpy()\n        img_np = np.transpose(img_np[[2, 1, 0], :, :], (1, 2, 0))  # HWC, BGR\n    elif n_dim == 3:\n        img_np = tensor.numpy()\n        img_np = np.transpose(img_np[[2, 1, 0], :, :], (1, 2, 0))  
# HWC, BGR\n    elif n_dim == 2:\n        img_np = tensor.numpy()\n    else:\n        raise TypeError(\n            'Only support 4D, 3D and 2D tensor. But received with dimension: {:d}'.format(n_dim))\n    if out_type == np.uint8:\n        img_np = (img_np * 255.0).round()\n        # Important. Unlike matlab, numpy.unit8() WILL NOT round by default.\n    return img_np.astype(out_type)\n\n\n'''\n# --------------------------------------------\n# Augmentation, flipe and/or rotate\n# --------------------------------------------\n# The following two are enough.\n# (1) augmet_img: numpy image of WxHxC or WxH\n# (2) augment_img_tensor4: tensor image 1xCxWxH\n# --------------------------------------------\n'''\n\n\ndef augment_img(img, mode=0):\n    '''Kai Zhang (github: https://github.com/cszn)\n    '''\n    if mode == 0:\n        return img\n    elif mode == 1:\n        return np.flipud(np.rot90(img))\n    elif mode == 2:\n        return np.flipud(img)\n    elif mode == 3:\n        return np.rot90(img, k=3)\n    elif mode == 4:\n        return np.flipud(np.rot90(img, k=2))\n    elif mode == 5:\n        return np.rot90(img)\n    elif mode == 6:\n        return np.rot90(img, k=2)\n    elif mode == 7:\n        return np.flipud(np.rot90(img, k=3))\n\n\ndef augment_img_tensor4(img, mode=0):\n    '''Kai Zhang (github: https://github.com/cszn)\n    '''\n    if mode == 0:\n        return img\n    elif mode == 1:\n        return img.rot90(1, [2, 3]).flip([2])\n    elif mode == 2:\n        return img.flip([2])\n    elif mode == 3:\n        return img.rot90(3, [2, 3])\n    elif mode == 4:\n        return img.rot90(2, [2, 3]).flip([2])\n    elif mode == 5:\n        return img.rot90(1, [2, 3])\n    elif mode == 6:\n        return img.rot90(2, [2, 3])\n    elif mode == 7:\n        return img.rot90(3, [2, 3]).flip([2])\n\n\ndef augment_img_tensor(img, mode=0):\n    '''Kai Zhang (github: https://github.com/cszn)\n    '''\n    img_size = img.size()\n    img_np = img.data.cpu().numpy()\n    if len(img_size) == 3:\n        img_np = np.transpose(img_np, (1, 2, 0))\n    elif len(img_size) == 4:\n        img_np = np.transpose(img_np, (2, 3, 1, 0))\n    img_np = augment_img(img_np, mode=mode)\n    img_tensor = torch.from_numpy(np.ascontiguousarray(img_np))\n    if len(img_size) == 3:\n        img_tensor = img_tensor.permute(2, 0, 1)\n    elif len(img_size) == 4:\n        img_tensor = img_tensor.permute(3, 2, 0, 1)\n\n    return img_tensor.type_as(img)\n\n\ndef augment_img_np3(img, mode=0):\n    if mode == 0:\n        return img\n    elif mode == 1:\n        return img.transpose(1, 0, 2)\n    elif mode == 2:\n        return img[::-1, :, :]\n    elif mode == 3:\n        img = img[::-1, :, :]\n        img = img.transpose(1, 0, 2)\n        return img\n    elif mode == 4:\n        return img[:, ::-1, :]\n    elif mode == 5:\n        img = img[:, ::-1, :]\n        img = img.transpose(1, 0, 2)\n        return img\n    elif mode == 6:\n        img = img[:, ::-1, :]\n        img = img[::-1, :, :]\n        return img\n    elif mode == 7:\n        img = img[:, ::-1, :]\n        img = img[::-1, :, :]\n        img = img.transpose(1, 0, 2)\n        return img\n\n\ndef augment_imgs(img_list, hflip=True, rot=True):\n    # horizontal flip OR rotate\n    hflip = hflip and random.random() < 0.5\n    vflip = rot and random.random() < 0.5\n    rot90 = rot and random.random() < 0.5\n\n    def _augment(img):\n        if hflip:\n            img = img[:, ::-1, :]\n        if vflip:\n            img = img[::-1, :, :]\n        if rot90:\n      
      img = img.transpose(1, 0, 2)\n        return img\n\n    return [_augment(img) for img in img_list]\n\n\n'''\n# --------------------------------------------\n# modcrop and shave\n# --------------------------------------------\n'''\n\n\ndef modcrop(img_in, scale):\n    # img_in: Numpy, HWC or HW\n    img = np.copy(img_in)\n    if img.ndim == 2:\n        H, W = img.shape\n        H_r, W_r = H % scale, W % scale\n        img = img[:H - H_r, :W - W_r]\n    elif img.ndim == 3:\n        H, W, C = img.shape\n        H_r, W_r = H % scale, W % scale\n        img = img[:H - H_r, :W - W_r, :]\n    else:\n        raise ValueError('Wrong img ndim: [{:d}].'.format(img.ndim))\n    return img\n\n\ndef shave(img_in, border=0):\n    # img_in: Numpy, HWC or HW\n    img = np.copy(img_in)\n    h, w = img.shape[:2]\n    img = img[border:h-border, border:w-border]\n    return img\n\n\n'''\n# --------------------------------------------\n# image processing process on numpy image\n# channel_convert(in_c, tar_type, img_list):\n# rgb2ycbcr(img, only_y=True):\n# bgr2ycbcr(img, only_y=True):\n# ycbcr2rgb(img):\n# --------------------------------------------\n'''\n\n\ndef rgb2ycbcr(img, only_y=True):\n    '''same as matlab rgb2ycbcr\n    only_y: only return Y channel\n    Input:\n        uint8, [0, 255]\n        float, [0, 1]\n    '''\n    in_img_type = img.dtype\n    img.astype(np.float32)\n    if in_img_type != np.uint8:\n        img *= 255.\n    # convert\n    if only_y:\n        rlt = np.dot(img, [65.481, 128.553, 24.966]) / 255.0 + 16.0\n    else:\n        rlt = np.matmul(img, [[65.481, -37.797, 112.0], [128.553, -74.203, -93.786],\n                              [24.966, 112.0, -18.214]]) / 255.0 + [16, 128, 128]\n    if in_img_type == np.uint8:\n        rlt = rlt.round()\n    else:\n        rlt /= 255.\n    return rlt.astype(in_img_type)\n\n\ndef ycbcr2rgb(img):\n    '''same as matlab ycbcr2rgb\n    Input:\n        uint8, [0, 255]\n        float, [0, 1]\n    '''\n    in_img_type = img.dtype\n    img.astype(np.float32)\n    if in_img_type != np.uint8:\n        img *= 255.\n    # convert\n    rlt = np.matmul(img, [[0.00456621, 0.00456621, 0.00456621], [0, -0.00153632, 0.00791071],\n                          [0.00625893, -0.00318811, 0]]) * 255.0 + [-222.921, 135.576, -276.836]\n    if in_img_type == np.uint8:\n        rlt = rlt.round()\n    else:\n        rlt /= 255.\n    return rlt.astype(in_img_type)\n\n\ndef bgr2ycbcr(img, only_y=True):\n    '''bgr version of rgb2ycbcr\n    only_y: only return Y channel\n    Input:\n        uint8, [0, 255]\n        float, [0, 1]\n    '''\n    in_img_type = img.dtype\n    img.astype(np.float32)\n    if in_img_type != np.uint8:\n        img *= 255.\n    # convert\n    if only_y:\n        rlt = np.dot(img, [24.966, 128.553, 65.481]) / 255.0 + 16.0\n    else:\n        rlt = np.matmul(img, [[24.966, 112.0, -18.214], [128.553, -74.203, -93.786],\n                              [65.481, -37.797, 112.0]]) / 255.0 + [16, 128, 128]\n    if in_img_type == np.uint8:\n        rlt = rlt.round()\n    else:\n        rlt /= 255.\n    return rlt.astype(in_img_type)\n\n\ndef channel_convert(in_c, tar_type, img_list):\n    # conversion among BGR, gray and y\n    if in_c == 3 and tar_type == 'gray':  # BGR to gray\n        gray_list = [cv2.cvtColor(img, cv2.COLOR_BGR2GRAY) for img in img_list]\n        return [np.expand_dims(img, axis=2) for img in gray_list]\n    elif in_c == 3 and tar_type == 'y':  # BGR to y\n        y_list = [bgr2ycbcr(img, only_y=True) for img in img_list]\n        return 
[np.expand_dims(img, axis=2) for img in y_list]\n    elif in_c == 1 and tar_type == 'RGB':  # gray/y to BGR\n        return [cv2.cvtColor(img, cv2.COLOR_GRAY2BGR) for img in img_list]\n    else:\n        return img_list\n\n\n'''\n# --------------------------------------------\n# metric, PSNR and SSIM\n# --------------------------------------------\n'''\n\n\n# --------------------------------------------\n# PSNR\n# --------------------------------------------\ndef calculate_psnr(img1, img2, border=0):\n    # img1 and img2 have range [0, 255]\n    #img1 = img1.squeeze()\n    #img2 = img2.squeeze()\n    if not img1.shape == img2.shape:\n        raise ValueError('Input images must have the same dimensions.')\n    h, w = img1.shape[:2]\n    img1 = img1[border:h-border, border:w-border]\n    img2 = img2[border:h-border, border:w-border]\n\n    img1 = img1.astype(np.float64)\n    img2 = img2.astype(np.float64)\n    mse = np.mean((img1 - img2)**2)\n    if mse == 0:\n        return float('inf')\n    return 20 * math.log10(255.0 / math.sqrt(mse))\n\n\n# --------------------------------------------\n# SSIM\n# --------------------------------------------\ndef calculate_ssim(img1, img2, border=0):\n    '''calculate SSIM\n    the same outputs as MATLAB's\n    img1, img2: [0, 255]\n    '''\n    #img1 = img1.squeeze()\n    #img2 = img2.squeeze()\n    if not img1.shape == img2.shape:\n        raise ValueError('Input images must have the same dimensions.')\n    h, w = img1.shape[:2]\n    img1 = img1[border:h-border, border:w-border]\n    img2 = img2[border:h-border, border:w-border]\n\n    if img1.ndim == 2:\n        return ssim(img1, img2)\n    elif img1.ndim == 3:\n        if img1.shape[2] == 3:\n            ssims = []\n            for i in range(3):\n                ssims.append(ssim(img1[:,:,i], img2[:,:,i]))\n            return np.array(ssims).mean()\n        elif img1.shape[2] == 1:\n            return ssim(np.squeeze(img1), np.squeeze(img2))\n    else:\n        raise ValueError('Wrong input image dimensions.')\n\n\ndef ssim(img1, img2):\n    C1 = (0.01 * 255)**2\n    C2 = (0.03 * 255)**2\n\n    img1 = img1.astype(np.float64)\n    img2 = img2.astype(np.float64)\n    kernel = cv2.getGaussianKernel(11, 1.5)\n    window = np.outer(kernel, kernel.transpose())\n\n    mu1 = cv2.filter2D(img1, -1, window)[5:-5, 5:-5]  # valid\n    mu2 = cv2.filter2D(img2, -1, window)[5:-5, 5:-5]\n    mu1_sq = mu1**2\n    mu2_sq = mu2**2\n    mu1_mu2 = mu1 * mu2\n    sigma1_sq = cv2.filter2D(img1**2, -1, window)[5:-5, 5:-5] - mu1_sq\n    sigma2_sq = cv2.filter2D(img2**2, -1, window)[5:-5, 5:-5] - mu2_sq\n    sigma12 = cv2.filter2D(img1 * img2, -1, window)[5:-5, 5:-5] - mu1_mu2\n\n    ssim_map = ((2 * mu1_mu2 + C1) * (2 * sigma12 + C2)) / ((mu1_sq + mu2_sq + C1) *\n                                                            (sigma1_sq + sigma2_sq + C2))\n    return ssim_map.mean()\n\n\n'''\n# --------------------------------------------\n# matlab's bicubic imresize (numpy and torch) [0, 1]\n# --------------------------------------------\n'''\n\n\n# matlab 'imresize' function, now only support 'bicubic'\ndef cubic(x):\n    absx = torch.abs(x)\n    absx2 = absx**2\n    absx3 = absx**3\n    return (1.5*absx3 - 2.5*absx2 + 1) * ((absx <= 1).type_as(absx)) + \\\n        (-0.5*absx3 + 2.5*absx2 - 4*absx + 2) * (((absx > 1)*(absx <= 2)).type_as(absx))\n\n\ndef calculate_weights_indices(in_length, out_length, scale, kernel, kernel_width, antialiasing):\n    if (scale < 1) and (antialiasing):\n        # Use a modified kernel to 
simultaneously interpolate and antialias- larger kernel width\n        kernel_width = kernel_width / scale\n\n    # Output-space coordinates\n    x = torch.linspace(1, out_length, out_length)\n\n    # Input-space coordinates. Calculate the inverse mapping such that 0.5\n    # in output space maps to 0.5 in input space, and 0.5+scale in output\n    # space maps to 1.5 in input space.\n    u = x / scale + 0.5 * (1 - 1 / scale)\n\n    # What is the left-most pixel that can be involved in the computation?\n    left = torch.floor(u - kernel_width / 2)\n\n    # What is the maximum number of pixels that can be involved in the\n    # computation?  Note: it's OK to use an extra pixel here; if the\n    # corresponding weights are all zero, it will be eliminated at the end\n    # of this function.\n    P = math.ceil(kernel_width) + 2\n\n    # The indices of the input pixels involved in computing the k-th output\n    # pixel are in row k of the indices matrix.\n    indices = left.view(out_length, 1).expand(out_length, P) + torch.linspace(0, P - 1, P).view(\n        1, P).expand(out_length, P)\n\n    # The weights used to compute the k-th output pixel are in row k of the\n    # weights matrix.\n    distance_to_center = u.view(out_length, 1).expand(out_length, P) - indices\n    # apply cubic kernel\n    if (scale < 1) and (antialiasing):\n        weights = scale * cubic(distance_to_center * scale)\n    else:\n        weights = cubic(distance_to_center)\n    # Normalize the weights matrix so that each row sums to 1.\n    weights_sum = torch.sum(weights, 1).view(out_length, 1)\n    weights = weights / weights_sum.expand(out_length, P)\n\n    # If a column in weights is all zero, get rid of it. only consider the first and last column.\n    weights_zero_tmp = torch.sum((weights == 0), 0)\n    if not math.isclose(weights_zero_tmp[0], 0, rel_tol=1e-6):\n        indices = indices.narrow(1, 1, P - 2)\n        weights = weights.narrow(1, 1, P - 2)\n    if not math.isclose(weights_zero_tmp[-1], 0, rel_tol=1e-6):\n        indices = indices.narrow(1, 0, P - 2)\n        weights = weights.narrow(1, 0, P - 2)\n    weights = weights.contiguous()\n    indices = indices.contiguous()\n    sym_len_s = -indices.min() + 1\n    sym_len_e = indices.max() - in_length\n    indices = indices + sym_len_s - 1\n    return weights, indices, int(sym_len_s), int(sym_len_e)\n\n\n# --------------------------------------------\n# imresize for tensor image [0, 1]\n# --------------------------------------------\ndef imresize(img, scale, antialiasing=True):\n    # Now the scale should be the same for H and W\n    # input: img: pytorch tensor, CHW or HW [0,1]\n    # output: CHW or HW [0,1] w/o round\n    need_squeeze = True if img.dim() == 2 else False\n    if need_squeeze:\n        img.unsqueeze_(0)\n    in_C, in_H, in_W = img.size()\n    out_C, out_H, out_W = in_C, math.ceil(in_H * scale), math.ceil(in_W * scale)\n    kernel_width = 4\n    kernel = 'cubic'\n\n    # Return the desired dimension order for performing the resize.  
The\n    # strategy is to perform the resize first along the dimension with the\n    # smallest scale factor.\n    # Now we do not support this.\n\n    # get weights and indices\n    weights_H, indices_H, sym_len_Hs, sym_len_He = calculate_weights_indices(\n        in_H, out_H, scale, kernel, kernel_width, antialiasing)\n    weights_W, indices_W, sym_len_Ws, sym_len_We = calculate_weights_indices(\n        in_W, out_W, scale, kernel, kernel_width, antialiasing)\n    # process H dimension\n    # symmetric copying\n    img_aug = torch.FloatTensor(in_C, in_H + sym_len_Hs + sym_len_He, in_W)\n    img_aug.narrow(1, sym_len_Hs, in_H).copy_(img)\n\n    sym_patch = img[:, :sym_len_Hs, :]\n    inv_idx = torch.arange(sym_patch.size(1) - 1, -1, -1).long()\n    sym_patch_inv = sym_patch.index_select(1, inv_idx)\n    img_aug.narrow(1, 0, sym_len_Hs).copy_(sym_patch_inv)\n\n    sym_patch = img[:, -sym_len_He:, :]\n    inv_idx = torch.arange(sym_patch.size(1) - 1, -1, -1).long()\n    sym_patch_inv = sym_patch.index_select(1, inv_idx)\n    img_aug.narrow(1, sym_len_Hs + in_H, sym_len_He).copy_(sym_patch_inv)\n\n    out_1 = torch.FloatTensor(in_C, out_H, in_W)\n    kernel_width = weights_H.size(1)\n    for i in range(out_H):\n        idx = int(indices_H[i][0])\n        for j in range(out_C):\n            out_1[j, i, :] = img_aug[j, idx:idx + kernel_width, :].transpose(0, 1).mv(weights_H[i])\n\n    # process W dimension\n    # symmetric copying\n    out_1_aug = torch.FloatTensor(in_C, out_H, in_W + sym_len_Ws + sym_len_We)\n    out_1_aug.narrow(2, sym_len_Ws, in_W).copy_(out_1)\n\n    sym_patch = out_1[:, :, :sym_len_Ws]\n    inv_idx = torch.arange(sym_patch.size(2) - 1, -1, -1).long()\n    sym_patch_inv = sym_patch.index_select(2, inv_idx)\n    out_1_aug.narrow(2, 0, sym_len_Ws).copy_(sym_patch_inv)\n\n    sym_patch = out_1[:, :, -sym_len_We:]\n    inv_idx = torch.arange(sym_patch.size(2) - 1, -1, -1).long()\n    sym_patch_inv = sym_patch.index_select(2, inv_idx)\n    out_1_aug.narrow(2, sym_len_Ws + in_W, sym_len_We).copy_(sym_patch_inv)\n\n    out_2 = torch.FloatTensor(in_C, out_H, out_W)\n    kernel_width = weights_W.size(1)\n    for i in range(out_W):\n        idx = int(indices_W[i][0])\n        for j in range(out_C):\n            out_2[j, :, i] = out_1_aug[j, :, idx:idx + kernel_width].mv(weights_W[i])\n    if need_squeeze:\n        out_2.squeeze_()\n    return out_2\n\n\n# --------------------------------------------\n# imresize for numpy image [0, 1]\n# --------------------------------------------\ndef imresize_np(img, scale, antialiasing=True):\n    # Now the scale should be the same for H and W\n    # input: img: Numpy, HWC or HW [0,1]\n    # output: HWC or HW [0,1] w/o round\n    img = torch.from_numpy(img)\n    need_squeeze = True if img.dim() == 2 else False\n    if need_squeeze:\n        img.unsqueeze_(2)\n\n    in_H, in_W, in_C = img.size()\n    out_C, out_H, out_W = in_C, math.ceil(in_H * scale), math.ceil(in_W * scale)\n    kernel_width = 4\n    kernel = 'cubic'\n\n    # Return the desired dimension order for performing the resize.  
The\n    # strategy is to perform the resize first along the dimension with the\n    # smallest scale factor.\n    # Now we do not support this.\n\n    # get weights and indices\n    weights_H, indices_H, sym_len_Hs, sym_len_He = calculate_weights_indices(\n        in_H, out_H, scale, kernel, kernel_width, antialiasing)\n    weights_W, indices_W, sym_len_Ws, sym_len_We = calculate_weights_indices(\n        in_W, out_W, scale, kernel, kernel_width, antialiasing)\n    # process H dimension\n    # symmetric copying\n    img_aug = torch.FloatTensor(in_H + sym_len_Hs + sym_len_He, in_W, in_C)\n    img_aug.narrow(0, sym_len_Hs, in_H).copy_(img)\n\n    sym_patch = img[:sym_len_Hs, :, :]\n    inv_idx = torch.arange(sym_patch.size(0) - 1, -1, -1).long()\n    sym_patch_inv = sym_patch.index_select(0, inv_idx)\n    img_aug.narrow(0, 0, sym_len_Hs).copy_(sym_patch_inv)\n\n    sym_patch = img[-sym_len_He:, :, :]\n    inv_idx = torch.arange(sym_patch.size(0) - 1, -1, -1).long()\n    sym_patch_inv = sym_patch.index_select(0, inv_idx)\n    img_aug.narrow(0, sym_len_Hs + in_H, sym_len_He).copy_(sym_patch_inv)\n\n    out_1 = torch.FloatTensor(out_H, in_W, in_C)\n    kernel_width = weights_H.size(1)\n    for i in range(out_H):\n        idx = int(indices_H[i][0])\n        for j in range(out_C):\n            out_1[i, :, j] = img_aug[idx:idx + kernel_width, :, j].transpose(0, 1).mv(weights_H[i])\n\n    # process W dimension\n    # symmetric copying\n    out_1_aug = torch.FloatTensor(out_H, in_W + sym_len_Ws + sym_len_We, in_C)\n    out_1_aug.narrow(1, sym_len_Ws, in_W).copy_(out_1)\n\n    sym_patch = out_1[:, :sym_len_Ws, :]\n    inv_idx = torch.arange(sym_patch.size(1) - 1, -1, -1).long()\n    sym_patch_inv = sym_patch.index_select(1, inv_idx)\n    out_1_aug.narrow(1, 0, sym_len_Ws).copy_(sym_patch_inv)\n\n    sym_patch = out_1[:, -sym_len_We:, :]\n    inv_idx = torch.arange(sym_patch.size(1) - 1, -1, -1).long()\n    sym_patch_inv = sym_patch.index_select(1, inv_idx)\n    out_1_aug.narrow(1, sym_len_Ws + in_W, sym_len_We).copy_(sym_patch_inv)\n\n    out_2 = torch.FloatTensor(out_H, out_W, in_C)\n    kernel_width = weights_W.size(1)\n    for i in range(out_W):\n        idx = int(indices_W[i][0])\n        for j in range(out_C):\n            out_2[:, i, j] = out_1_aug[:, idx:idx + kernel_width, j].mv(weights_W[i])\n    if need_squeeze:\n        out_2.squeeze_()\n\n    return out_2.numpy()\n\n\nif __name__ == '__main__':\n    print('---')\n#    img = imread_uint('test.bmp', 3)\n#    img = uint2single(img)\n#    img_bicubic = imresize_np(img, 1/4)"
  },
  {
    "path": "ToonCrafter/ldm/modules/midas/__init__.py",
    "content": ""
  },
  {
    "path": "ToonCrafter/ldm/modules/midas/api.py",
    "content": "# based on https://github.com/isl-org/MiDaS\n\nimport cv2\nimport torch\nimport torch.nn as nn\nfrom torchvision.transforms import Compose\n\nfrom ToonCrafter.ldm.modules.midas.midas.dpt_depth import DPTDepthModel\nfrom ToonCrafter.ldm.modules.midas.midas.midas_net import MidasNet\nfrom ToonCrafter.ldm.modules.midas.midas.midas_net_custom import MidasNet_small\nfrom ToonCrafter.ldm.modules.midas.midas.transforms import Resize, NormalizeImage, PrepareForNet\n\n\nISL_PATHS = {\n    \"dpt_large\": \"midas_models/dpt_large-midas-2f21e586.pt\",\n    \"dpt_hybrid\": \"midas_models/dpt_hybrid-midas-501f0c75.pt\",\n    \"midas_v21\": \"\",\n    \"midas_v21_small\": \"\",\n}\n\n\ndef disabled_train(self, mode=True):\n    \"\"\"Overwrite model.train with this function to make sure train/eval mode\n    does not change anymore.\"\"\"\n    return self\n\n\ndef load_midas_transform(model_type):\n    # https://github.com/isl-org/MiDaS/blob/master/run.py\n    # load transform only\n    if model_type == \"dpt_large\":  # DPT-Large\n        net_w, net_h = 384, 384\n        resize_mode = \"minimal\"\n        normalization = NormalizeImage(mean=[0.5, 0.5, 0.5], std=[0.5, 0.5, 0.5])\n\n    elif model_type == \"dpt_hybrid\":  # DPT-Hybrid\n        net_w, net_h = 384, 384\n        resize_mode = \"minimal\"\n        normalization = NormalizeImage(mean=[0.5, 0.5, 0.5], std=[0.5, 0.5, 0.5])\n\n    elif model_type == \"midas_v21\":\n        net_w, net_h = 384, 384\n        resize_mode = \"upper_bound\"\n        normalization = NormalizeImage(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225])\n\n    elif model_type == \"midas_v21_small\":\n        net_w, net_h = 256, 256\n        resize_mode = \"upper_bound\"\n        normalization = NormalizeImage(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225])\n\n    else:\n        assert False, f\"model_type '{model_type}' not implemented, use: --model_type large\"\n\n    transform = Compose(\n        [\n            Resize(\n                net_w,\n                net_h,\n                resize_target=None,\n                keep_aspect_ratio=True,\n                ensure_multiple_of=32,\n                resize_method=resize_mode,\n                image_interpolation_method=cv2.INTER_CUBIC,\n            ),\n            normalization,\n            PrepareForNet(),\n        ]\n    )\n\n    return transform\n\n\ndef load_model(model_type):\n    # https://github.com/isl-org/MiDaS/blob/master/run.py\n    # load network\n    model_path = ISL_PATHS[model_type]\n    if model_type == \"dpt_large\":  # DPT-Large\n        model = DPTDepthModel(\n            path=model_path,\n            backbone=\"vitl16_384\",\n            non_negative=True,\n        )\n        net_w, net_h = 384, 384\n        resize_mode = \"minimal\"\n        normalization = NormalizeImage(mean=[0.5, 0.5, 0.5], std=[0.5, 0.5, 0.5])\n\n    elif model_type == \"dpt_hybrid\":  # DPT-Hybrid\n        model = DPTDepthModel(\n            path=model_path,\n            backbone=\"vitb_rn50_384\",\n            non_negative=True,\n        )\n        net_w, net_h = 384, 384\n        resize_mode = \"minimal\"\n        normalization = NormalizeImage(mean=[0.5, 0.5, 0.5], std=[0.5, 0.5, 0.5])\n\n    elif model_type == \"midas_v21\":\n        model = MidasNet(model_path, non_negative=True)\n        net_w, net_h = 384, 384\n        resize_mode = \"upper_bound\"\n        normalization = NormalizeImage(\n            mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]\n        )\n\n    elif model_type == 
\"midas_v21_small\":\n        model = MidasNet_small(model_path, features=64, backbone=\"efficientnet_lite3\", exportable=True,\n                               non_negative=True, blocks={'expand': True})\n        net_w, net_h = 256, 256\n        resize_mode = \"upper_bound\"\n        normalization = NormalizeImage(\n            mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]\n        )\n\n    else:\n        print(f\"model_type '{model_type}' not implemented, use: --model_type large\")\n        assert False\n\n    transform = Compose(\n        [\n            Resize(\n                net_w,\n                net_h,\n                resize_target=None,\n                keep_aspect_ratio=True,\n                ensure_multiple_of=32,\n                resize_method=resize_mode,\n                image_interpolation_method=cv2.INTER_CUBIC,\n            ),\n            normalization,\n            PrepareForNet(),\n        ]\n    )\n\n    return model.eval(), transform\n\n\nclass MiDaSInference(nn.Module):\n    MODEL_TYPES_TORCH_HUB = [\n        \"DPT_Large\",\n        \"DPT_Hybrid\",\n        \"MiDaS_small\"\n    ]\n    MODEL_TYPES_ISL = [\n        \"dpt_large\",\n        \"dpt_hybrid\",\n        \"midas_v21\",\n        \"midas_v21_small\",\n    ]\n\n    def __init__(self, model_type):\n        super().__init__()\n        assert (model_type in self.MODEL_TYPES_ISL)\n        model, _ = load_model(model_type)\n        self.model = model\n        self.model.train = disabled_train\n\n    def forward(self, x):\n        # x in 0..1 as produced by calling self.transform on a 0..1 float64 numpy array\n        # NOTE: we expect that the correct transform has been called during dataloading.\n        with torch.no_grad():\n            prediction = self.model(x)\n            prediction = torch.nn.functional.interpolate(\n                prediction.unsqueeze(1),\n                size=x.shape[2:],\n                mode=\"bicubic\",\n                align_corners=False,\n            )\n        assert prediction.shape == (x.shape[0], 1, x.shape[2], x.shape[3])\n        return prediction\n\n"
  },
  {
    "path": "ToonCrafter/ldm/modules/midas/midas/__init__.py",
    "content": ""
  },
  {
    "path": "ToonCrafter/ldm/modules/midas/midas/base_model.py",
    "content": "import torch\n\n\nclass BaseModel(torch.nn.Module):\n    def load(self, path):\n        \"\"\"Load model from file.\n\n        Args:\n            path (str): file path\n        \"\"\"\n        parameters = torch.load(path, map_location=torch.device('cpu'))\n\n        if \"optimizer\" in parameters:\n            parameters = parameters[\"model\"]\n\n        self.load_state_dict(parameters)\n"
  },
  {
    "path": "ToonCrafter/ldm/modules/midas/midas/blocks.py",
    "content": "import torch\nimport torch.nn as nn\n\nfrom .vit import (\n    _make_pretrained_vitb_rn50_384,\n    _make_pretrained_vitl16_384,\n    _make_pretrained_vitb16_384,\n    forward_vit,\n)\n\ndef _make_encoder(backbone, features, use_pretrained, groups=1, expand=False, exportable=True, hooks=None, use_vit_only=False, use_readout=\"ignore\",):\n    if backbone == \"vitl16_384\":\n        pretrained = _make_pretrained_vitl16_384(\n            use_pretrained, hooks=hooks, use_readout=use_readout\n        )\n        scratch = _make_scratch(\n            [256, 512, 1024, 1024], features, groups=groups, expand=expand\n        )  # ViT-L/16 - 85.0% Top1 (backbone)\n    elif backbone == \"vitb_rn50_384\":\n        pretrained = _make_pretrained_vitb_rn50_384(\n            use_pretrained,\n            hooks=hooks,\n            use_vit_only=use_vit_only,\n            use_readout=use_readout,\n        )\n        scratch = _make_scratch(\n            [256, 512, 768, 768], features, groups=groups, expand=expand\n        )  # ViT-H/16 - 85.0% Top1 (backbone)\n    elif backbone == \"vitb16_384\":\n        pretrained = _make_pretrained_vitb16_384(\n            use_pretrained, hooks=hooks, use_readout=use_readout\n        )\n        scratch = _make_scratch(\n            [96, 192, 384, 768], features, groups=groups, expand=expand\n        )  # ViT-B/16 - 84.6% Top1 (backbone)\n    elif backbone == \"resnext101_wsl\":\n        pretrained = _make_pretrained_resnext101_wsl(use_pretrained)\n        scratch = _make_scratch([256, 512, 1024, 2048], features, groups=groups, expand=expand)     # efficientnet_lite3  \n    elif backbone == \"efficientnet_lite3\":\n        pretrained = _make_pretrained_efficientnet_lite3(use_pretrained, exportable=exportable)\n        scratch = _make_scratch([32, 48, 136, 384], features, groups=groups, expand=expand)  # efficientnet_lite3     \n    else:\n        print(f\"Backbone '{backbone}' not implemented\")\n        assert False\n        \n    return pretrained, scratch\n\n\ndef _make_scratch(in_shape, out_shape, groups=1, expand=False):\n    scratch = nn.Module()\n\n    out_shape1 = out_shape\n    out_shape2 = out_shape\n    out_shape3 = out_shape\n    out_shape4 = out_shape\n    if expand==True:\n        out_shape1 = out_shape\n        out_shape2 = out_shape*2\n        out_shape3 = out_shape*4\n        out_shape4 = out_shape*8\n\n    scratch.layer1_rn = nn.Conv2d(\n        in_shape[0], out_shape1, kernel_size=3, stride=1, padding=1, bias=False, groups=groups\n    )\n    scratch.layer2_rn = nn.Conv2d(\n        in_shape[1], out_shape2, kernel_size=3, stride=1, padding=1, bias=False, groups=groups\n    )\n    scratch.layer3_rn = nn.Conv2d(\n        in_shape[2], out_shape3, kernel_size=3, stride=1, padding=1, bias=False, groups=groups\n    )\n    scratch.layer4_rn = nn.Conv2d(\n        in_shape[3], out_shape4, kernel_size=3, stride=1, padding=1, bias=False, groups=groups\n    )\n\n    return scratch\n\n\ndef _make_pretrained_efficientnet_lite3(use_pretrained, exportable=False):\n    efficientnet = torch.hub.load(\n        \"rwightman/gen-efficientnet-pytorch\",\n        \"tf_efficientnet_lite3\",\n        pretrained=use_pretrained,\n        exportable=exportable\n    )\n    return _make_efficientnet_backbone(efficientnet)\n\n\ndef _make_efficientnet_backbone(effnet):\n    pretrained = nn.Module()\n\n    pretrained.layer1 = nn.Sequential(\n        effnet.conv_stem, effnet.bn1, effnet.act1, *effnet.blocks[0:2]\n    )\n    pretrained.layer2 = 
nn.Sequential(*effnet.blocks[2:3])\n    pretrained.layer3 = nn.Sequential(*effnet.blocks[3:5])\n    pretrained.layer4 = nn.Sequential(*effnet.blocks[5:9])\n\n    return pretrained\n    \n\ndef _make_resnet_backbone(resnet):\n    pretrained = nn.Module()\n    pretrained.layer1 = nn.Sequential(\n        resnet.conv1, resnet.bn1, resnet.relu, resnet.maxpool, resnet.layer1\n    )\n\n    pretrained.layer2 = resnet.layer2\n    pretrained.layer3 = resnet.layer3\n    pretrained.layer4 = resnet.layer4\n\n    return pretrained\n\n\ndef _make_pretrained_resnext101_wsl(use_pretrained):\n    resnet = torch.hub.load(\"facebookresearch/WSL-Images\", \"resnext101_32x8d_wsl\")\n    return _make_resnet_backbone(resnet)\n\n\n\nclass Interpolate(nn.Module):\n    \"\"\"Interpolation module.\n    \"\"\"\n\n    def __init__(self, scale_factor, mode, align_corners=False):\n        \"\"\"Init.\n\n        Args:\n            scale_factor (float): scaling\n            mode (str): interpolation mode\n        \"\"\"\n        super(Interpolate, self).__init__()\n\n        self.interp = nn.functional.interpolate\n        self.scale_factor = scale_factor\n        self.mode = mode\n        self.align_corners = align_corners\n\n    def forward(self, x):\n        \"\"\"Forward pass.\n\n        Args:\n            x (tensor): input\n\n        Returns:\n            tensor: interpolated data\n        \"\"\"\n\n        x = self.interp(\n            x, scale_factor=self.scale_factor, mode=self.mode, align_corners=self.align_corners\n        )\n\n        return x\n\n\nclass ResidualConvUnit(nn.Module):\n    \"\"\"Residual convolution module.\n    \"\"\"\n\n    def __init__(self, features):\n        \"\"\"Init.\n\n        Args:\n            features (int): number of features\n        \"\"\"\n        super().__init__()\n\n        self.conv1 = nn.Conv2d(\n            features, features, kernel_size=3, stride=1, padding=1, bias=True\n        )\n\n        self.conv2 = nn.Conv2d(\n            features, features, kernel_size=3, stride=1, padding=1, bias=True\n        )\n\n        self.relu = nn.ReLU(inplace=True)\n\n    def forward(self, x):\n        \"\"\"Forward pass.\n\n        Args:\n            x (tensor): input\n\n        Returns:\n            tensor: output\n        \"\"\"\n        out = self.relu(x)\n        out = self.conv1(out)\n        out = self.relu(out)\n        out = self.conv2(out)\n\n        return out + x\n\n\nclass FeatureFusionBlock(nn.Module):\n    \"\"\"Feature fusion block.\n    \"\"\"\n\n    def __init__(self, features):\n        \"\"\"Init.\n\n        Args:\n            features (int): number of features\n        \"\"\"\n        super(FeatureFusionBlock, self).__init__()\n\n        self.resConfUnit1 = ResidualConvUnit(features)\n        self.resConfUnit2 = ResidualConvUnit(features)\n\n    def forward(self, *xs):\n        \"\"\"Forward pass.\n\n        Returns:\n            tensor: output\n        \"\"\"\n        output = xs[0]\n\n        if len(xs) == 2:\n            output += self.resConfUnit1(xs[1])\n\n        output = self.resConfUnit2(output)\n\n        output = nn.functional.interpolate(\n            output, scale_factor=2, mode=\"bilinear\", align_corners=True\n        )\n\n        return output\n\n\n\n\nclass ResidualConvUnit_custom(nn.Module):\n    \"\"\"Residual convolution module.\n    \"\"\"\n\n    def __init__(self, features, activation, bn):\n        \"\"\"Init.\n\n        Args:\n            features (int): number of features\n        \"\"\"\n        super().__init__()\n\n        self.bn = bn\n\n     
   self.groups=1\n\n        self.conv1 = nn.Conv2d(\n            features, features, kernel_size=3, stride=1, padding=1, bias=True, groups=self.groups\n        )\n        \n        self.conv2 = nn.Conv2d(\n            features, features, kernel_size=3, stride=1, padding=1, bias=True, groups=self.groups\n        )\n\n        if self.bn==True:\n            self.bn1 = nn.BatchNorm2d(features)\n            self.bn2 = nn.BatchNorm2d(features)\n\n        self.activation = activation\n\n        self.skip_add = nn.quantized.FloatFunctional()\n\n    def forward(self, x):\n        \"\"\"Forward pass.\n\n        Args:\n            x (tensor): input\n\n        Returns:\n            tensor: output\n        \"\"\"\n        \n        out = self.activation(x)\n        out = self.conv1(out)\n        if self.bn==True:\n            out = self.bn1(out)\n       \n        out = self.activation(out)\n        out = self.conv2(out)\n        if self.bn==True:\n            out = self.bn2(out)\n\n        if self.groups > 1:\n            out = self.conv_merge(out)\n\n        return self.skip_add.add(out, x)\n\n        # return out + x\n\n\nclass FeatureFusionBlock_custom(nn.Module):\n    \"\"\"Feature fusion block.\n    \"\"\"\n\n    def __init__(self, features, activation, deconv=False, bn=False, expand=False, align_corners=True):\n        \"\"\"Init.\n\n        Args:\n            features (int): number of features\n        \"\"\"\n        super(FeatureFusionBlock_custom, self).__init__()\n\n        self.deconv = deconv\n        self.align_corners = align_corners\n\n        self.groups=1\n\n        self.expand = expand\n        out_features = features\n        if self.expand==True:\n            out_features = features//2\n        \n        self.out_conv = nn.Conv2d(features, out_features, kernel_size=1, stride=1, padding=0, bias=True, groups=1)\n\n        self.resConfUnit1 = ResidualConvUnit_custom(features, activation, bn)\n        self.resConfUnit2 = ResidualConvUnit_custom(features, activation, bn)\n        \n        self.skip_add = nn.quantized.FloatFunctional()\n\n    def forward(self, *xs):\n        \"\"\"Forward pass.\n\n        Returns:\n            tensor: output\n        \"\"\"\n        output = xs[0]\n\n        if len(xs) == 2:\n            res = self.resConfUnit1(xs[1])\n            output = self.skip_add.add(output, res)\n            # output += res\n\n        output = self.resConfUnit2(output)\n\n        output = nn.functional.interpolate(\n            output, scale_factor=2, mode=\"bilinear\", align_corners=self.align_corners\n        )\n\n        output = self.out_conv(output)\n\n        return output\n\n"
  },
  {
    "path": "ToonCrafter/ldm/modules/midas/midas/dpt_depth.py",
    "content": "import torch\nimport torch.nn as nn\nimport torch.nn.functional as F\n\nfrom .base_model import BaseModel\nfrom .blocks import (\n    FeatureFusionBlock,\n    FeatureFusionBlock_custom,\n    Interpolate,\n    _make_encoder,\n    forward_vit,\n)\n\n\ndef _make_fusion_block(features, use_bn):\n    return FeatureFusionBlock_custom(\n        features,\n        nn.ReLU(False),\n        deconv=False,\n        bn=use_bn,\n        expand=False,\n        align_corners=True,\n    )\n\n\nclass DPT(BaseModel):\n    def __init__(\n        self,\n        head,\n        features=256,\n        backbone=\"vitb_rn50_384\",\n        readout=\"project\",\n        channels_last=False,\n        use_bn=False,\n    ):\n\n        super(DPT, self).__init__()\n\n        self.channels_last = channels_last\n\n        hooks = {\n            \"vitb_rn50_384\": [0, 1, 8, 11],\n            \"vitb16_384\": [2, 5, 8, 11],\n            \"vitl16_384\": [5, 11, 17, 23],\n        }\n\n        # Instantiate backbone and reassemble blocks\n        self.pretrained, self.scratch = _make_encoder(\n            backbone,\n            features,\n            False, # Set to true of you want to train from scratch, uses ImageNet weights\n            groups=1,\n            expand=False,\n            exportable=False,\n            hooks=hooks[backbone],\n            use_readout=readout,\n        )\n\n        self.scratch.refinenet1 = _make_fusion_block(features, use_bn)\n        self.scratch.refinenet2 = _make_fusion_block(features, use_bn)\n        self.scratch.refinenet3 = _make_fusion_block(features, use_bn)\n        self.scratch.refinenet4 = _make_fusion_block(features, use_bn)\n\n        self.scratch.output_conv = head\n\n\n    def forward(self, x):\n        if self.channels_last == True:\n            x.contiguous(memory_format=torch.channels_last)\n\n        layer_1, layer_2, layer_3, layer_4 = forward_vit(self.pretrained, x)\n\n        layer_1_rn = self.scratch.layer1_rn(layer_1)\n        layer_2_rn = self.scratch.layer2_rn(layer_2)\n        layer_3_rn = self.scratch.layer3_rn(layer_3)\n        layer_4_rn = self.scratch.layer4_rn(layer_4)\n\n        path_4 = self.scratch.refinenet4(layer_4_rn)\n        path_3 = self.scratch.refinenet3(path_4, layer_3_rn)\n        path_2 = self.scratch.refinenet2(path_3, layer_2_rn)\n        path_1 = self.scratch.refinenet1(path_2, layer_1_rn)\n\n        out = self.scratch.output_conv(path_1)\n\n        return out\n\n\nclass DPTDepthModel(DPT):\n    def __init__(self, path=None, non_negative=True, **kwargs):\n        features = kwargs[\"features\"] if \"features\" in kwargs else 256\n\n        head = nn.Sequential(\n            nn.Conv2d(features, features // 2, kernel_size=3, stride=1, padding=1),\n            Interpolate(scale_factor=2, mode=\"bilinear\", align_corners=True),\n            nn.Conv2d(features // 2, 32, kernel_size=3, stride=1, padding=1),\n            nn.ReLU(True),\n            nn.Conv2d(32, 1, kernel_size=1, stride=1, padding=0),\n            nn.ReLU(True) if non_negative else nn.Identity(),\n            nn.Identity(),\n        )\n\n        super().__init__(head, **kwargs)\n\n        if path is not None:\n           self.load(path)\n\n    def forward(self, x):\n        return super().forward(x).squeeze(dim=1)\n\n"
  },
  {
    "path": "ToonCrafter/ldm/modules/midas/midas/midas_net.py",
    "content": "\"\"\"MidashNet: Network for monocular depth estimation trained by mixing several datasets.\nThis file contains code that is adapted from\nhttps://github.com/thomasjpfan/pytorch_refinenet/blob/master/pytorch_refinenet/refinenet/refinenet_4cascade.py\n\"\"\"\nimport torch\nimport torch.nn as nn\n\nfrom .base_model import BaseModel\nfrom .blocks import FeatureFusionBlock, Interpolate, _make_encoder\n\n\nclass MidasNet(BaseModel):\n    \"\"\"Network for monocular depth estimation.\n    \"\"\"\n\n    def __init__(self, path=None, features=256, non_negative=True):\n        \"\"\"Init.\n\n        Args:\n            path (str, optional): Path to saved model. Defaults to None.\n            features (int, optional): Number of features. Defaults to 256.\n            backbone (str, optional): Backbone network for encoder. Defaults to resnet50\n        \"\"\"\n        print(\"Loading weights: \", path)\n\n        super(MidasNet, self).__init__()\n\n        use_pretrained = False if path is None else True\n\n        self.pretrained, self.scratch = _make_encoder(backbone=\"resnext101_wsl\", features=features, use_pretrained=use_pretrained)\n\n        self.scratch.refinenet4 = FeatureFusionBlock(features)\n        self.scratch.refinenet3 = FeatureFusionBlock(features)\n        self.scratch.refinenet2 = FeatureFusionBlock(features)\n        self.scratch.refinenet1 = FeatureFusionBlock(features)\n\n        self.scratch.output_conv = nn.Sequential(\n            nn.Conv2d(features, 128, kernel_size=3, stride=1, padding=1),\n            Interpolate(scale_factor=2, mode=\"bilinear\"),\n            nn.Conv2d(128, 32, kernel_size=3, stride=1, padding=1),\n            nn.ReLU(True),\n            nn.Conv2d(32, 1, kernel_size=1, stride=1, padding=0),\n            nn.ReLU(True) if non_negative else nn.Identity(),\n        )\n\n        if path:\n            self.load(path)\n\n    def forward(self, x):\n        \"\"\"Forward pass.\n\n        Args:\n            x (tensor): input data (image)\n\n        Returns:\n            tensor: depth\n        \"\"\"\n\n        layer_1 = self.pretrained.layer1(x)\n        layer_2 = self.pretrained.layer2(layer_1)\n        layer_3 = self.pretrained.layer3(layer_2)\n        layer_4 = self.pretrained.layer4(layer_3)\n\n        layer_1_rn = self.scratch.layer1_rn(layer_1)\n        layer_2_rn = self.scratch.layer2_rn(layer_2)\n        layer_3_rn = self.scratch.layer3_rn(layer_3)\n        layer_4_rn = self.scratch.layer4_rn(layer_4)\n\n        path_4 = self.scratch.refinenet4(layer_4_rn)\n        path_3 = self.scratch.refinenet3(path_4, layer_3_rn)\n        path_2 = self.scratch.refinenet2(path_3, layer_2_rn)\n        path_1 = self.scratch.refinenet1(path_2, layer_1_rn)\n\n        out = self.scratch.output_conv(path_1)\n\n        return torch.squeeze(out, dim=1)\n"
  },
  {
    "path": "ToonCrafter/ldm/modules/midas/midas/midas_net_custom.py",
    "content": "\"\"\"MidashNet: Network for monocular depth estimation trained by mixing several datasets.\nThis file contains code that is adapted from\nhttps://github.com/thomasjpfan/pytorch_refinenet/blob/master/pytorch_refinenet/refinenet/refinenet_4cascade.py\n\"\"\"\nimport torch\nimport torch.nn as nn\n\nfrom .base_model import BaseModel\nfrom .blocks import FeatureFusionBlock, FeatureFusionBlock_custom, Interpolate, _make_encoder\n\n\nclass MidasNet_small(BaseModel):\n    \"\"\"Network for monocular depth estimation.\n    \"\"\"\n\n    def __init__(self, path=None, features=64, backbone=\"efficientnet_lite3\", non_negative=True, exportable=True, channels_last=False, align_corners=True,\n        blocks={'expand': True}):\n        \"\"\"Init.\n\n        Args:\n            path (str, optional): Path to saved model. Defaults to None.\n            features (int, optional): Number of features. Defaults to 256.\n            backbone (str, optional): Backbone network for encoder. Defaults to resnet50\n        \"\"\"\n        print(\"Loading weights: \", path)\n\n        super(MidasNet_small, self).__init__()\n\n        use_pretrained = False if path else True\n                \n        self.channels_last = channels_last\n        self.blocks = blocks\n        self.backbone = backbone\n\n        self.groups = 1\n\n        features1=features\n        features2=features\n        features3=features\n        features4=features\n        self.expand = False\n        if \"expand\" in self.blocks and self.blocks['expand'] == True:\n            self.expand = True\n            features1=features\n            features2=features*2\n            features3=features*4\n            features4=features*8\n\n        self.pretrained, self.scratch = _make_encoder(self.backbone, features, use_pretrained, groups=self.groups, expand=self.expand, exportable=exportable)\n  \n        self.scratch.activation = nn.ReLU(False)    \n\n        self.scratch.refinenet4 = FeatureFusionBlock_custom(features4, self.scratch.activation, deconv=False, bn=False, expand=self.expand, align_corners=align_corners)\n        self.scratch.refinenet3 = FeatureFusionBlock_custom(features3, self.scratch.activation, deconv=False, bn=False, expand=self.expand, align_corners=align_corners)\n        self.scratch.refinenet2 = FeatureFusionBlock_custom(features2, self.scratch.activation, deconv=False, bn=False, expand=self.expand, align_corners=align_corners)\n        self.scratch.refinenet1 = FeatureFusionBlock_custom(features1, self.scratch.activation, deconv=False, bn=False, align_corners=align_corners)\n\n        \n        self.scratch.output_conv = nn.Sequential(\n            nn.Conv2d(features, features//2, kernel_size=3, stride=1, padding=1, groups=self.groups),\n            Interpolate(scale_factor=2, mode=\"bilinear\"),\n            nn.Conv2d(features//2, 32, kernel_size=3, stride=1, padding=1),\n            self.scratch.activation,\n            nn.Conv2d(32, 1, kernel_size=1, stride=1, padding=0),\n            nn.ReLU(True) if non_negative else nn.Identity(),\n            nn.Identity(),\n        )\n        \n        if path:\n            self.load(path)\n\n\n    def forward(self, x):\n        \"\"\"Forward pass.\n\n        Args:\n            x (tensor): input data (image)\n\n        Returns:\n            tensor: depth\n        \"\"\"\n        if self.channels_last==True:\n            print(\"self.channels_last = \", self.channels_last)\n            x.contiguous(memory_format=torch.channels_last)\n\n\n        layer_1 = 
self.pretrained.layer1(x)\n        layer_2 = self.pretrained.layer2(layer_1)\n        layer_3 = self.pretrained.layer3(layer_2)\n        layer_4 = self.pretrained.layer4(layer_3)\n        \n        layer_1_rn = self.scratch.layer1_rn(layer_1)\n        layer_2_rn = self.scratch.layer2_rn(layer_2)\n        layer_3_rn = self.scratch.layer3_rn(layer_3)\n        layer_4_rn = self.scratch.layer4_rn(layer_4)\n\n\n        path_4 = self.scratch.refinenet4(layer_4_rn)\n        path_3 = self.scratch.refinenet3(path_4, layer_3_rn)\n        path_2 = self.scratch.refinenet2(path_3, layer_2_rn)\n        path_1 = self.scratch.refinenet1(path_2, layer_1_rn)\n        \n        out = self.scratch.output_conv(path_1)\n\n        return torch.squeeze(out, dim=1)\n\n\n\ndef fuse_model(m):\n    prev_previous_type = nn.Identity()\n    prev_previous_name = ''\n    previous_type = nn.Identity()\n    previous_name = ''\n    for name, module in m.named_modules():\n        if prev_previous_type == nn.Conv2d and previous_type == nn.BatchNorm2d and type(module) == nn.ReLU:\n            # print(\"FUSED \", prev_previous_name, previous_name, name)\n            torch.quantization.fuse_modules(m, [prev_previous_name, previous_name, name], inplace=True)\n        elif prev_previous_type == nn.Conv2d and previous_type == nn.BatchNorm2d:\n            # print(\"FUSED \", prev_previous_name, previous_name)\n            torch.quantization.fuse_modules(m, [prev_previous_name, previous_name], inplace=True)\n        # elif previous_type == nn.Conv2d and type(module) == nn.ReLU:\n        #    print(\"FUSED \", previous_name, name)\n        #    torch.quantization.fuse_modules(m, [previous_name, name], inplace=True)\n\n        prev_previous_type = previous_type\n        prev_previous_name = previous_name\n        previous_type = type(module)\n        previous_name = name"
  },
  {
    "path": "ToonCrafter/ldm/modules/midas/midas/transforms.py",
    "content": "import numpy as np\nimport cv2\nimport math\n\n\ndef apply_min_size(sample, size, image_interpolation_method=cv2.INTER_AREA):\n    \"\"\"Rezise the sample to ensure the given size. Keeps aspect ratio.\n\n    Args:\n        sample (dict): sample\n        size (tuple): image size\n\n    Returns:\n        tuple: new size\n    \"\"\"\n    shape = list(sample[\"disparity\"].shape)\n\n    if shape[0] >= size[0] and shape[1] >= size[1]:\n        return sample\n\n    scale = [0, 0]\n    scale[0] = size[0] / shape[0]\n    scale[1] = size[1] / shape[1]\n\n    scale = max(scale)\n\n    shape[0] = math.ceil(scale * shape[0])\n    shape[1] = math.ceil(scale * shape[1])\n\n    # resize\n    sample[\"image\"] = cv2.resize(\n        sample[\"image\"], tuple(shape[::-1]), interpolation=image_interpolation_method\n    )\n\n    sample[\"disparity\"] = cv2.resize(\n        sample[\"disparity\"], tuple(shape[::-1]), interpolation=cv2.INTER_NEAREST\n    )\n    sample[\"mask\"] = cv2.resize(\n        sample[\"mask\"].astype(np.float32),\n        tuple(shape[::-1]),\n        interpolation=cv2.INTER_NEAREST,\n    )\n    sample[\"mask\"] = sample[\"mask\"].astype(bool)\n\n    return tuple(shape)\n\n\nclass Resize(object):\n    \"\"\"Resize sample to given size (width, height).\n    \"\"\"\n\n    def __init__(\n        self,\n        width,\n        height,\n        resize_target=True,\n        keep_aspect_ratio=False,\n        ensure_multiple_of=1,\n        resize_method=\"lower_bound\",\n        image_interpolation_method=cv2.INTER_AREA,\n    ):\n        \"\"\"Init.\n\n        Args:\n            width (int): desired output width\n            height (int): desired output height\n            resize_target (bool, optional):\n                True: Resize the full sample (image, mask, target).\n                False: Resize image only.\n                Defaults to True.\n            keep_aspect_ratio (bool, optional):\n                True: Keep the aspect ratio of the input sample.\n                Output sample might not have the given width and height, and\n                resize behaviour depends on the parameter 'resize_method'.\n                Defaults to False.\n            ensure_multiple_of (int, optional):\n                Output width and height is constrained to be multiple of this parameter.\n                Defaults to 1.\n            resize_method (str, optional):\n                \"lower_bound\": Output will be at least as large as the given size.\n                \"upper_bound\": Output will be at max as large as the given size. (Output size might be smaller than given size.)\n                \"minimal\": Scale as least as possible.  
(Output size might be smaller than given size.)\n                Defaults to \"lower_bound\".\n        \"\"\"\n        self.__width = width\n        self.__height = height\n\n        self.__resize_target = resize_target\n        self.__keep_aspect_ratio = keep_aspect_ratio\n        self.__multiple_of = ensure_multiple_of\n        self.__resize_method = resize_method\n        self.__image_interpolation_method = image_interpolation_method\n\n    def constrain_to_multiple_of(self, x, min_val=0, max_val=None):\n        y = (np.round(x / self.__multiple_of) * self.__multiple_of).astype(int)\n\n        if max_val is not None and y > max_val:\n            y = (np.floor(x / self.__multiple_of) * self.__multiple_of).astype(int)\n\n        if y < min_val:\n            y = (np.ceil(x / self.__multiple_of) * self.__multiple_of).astype(int)\n\n        return y\n\n    def get_size(self, width, height):\n        # determine new height and width\n        scale_height = self.__height / height\n        scale_width = self.__width / width\n\n        if self.__keep_aspect_ratio:\n            if self.__resize_method == \"lower_bound\":\n                # scale such that output size is lower bound\n                if scale_width > scale_height:\n                    # fit width\n                    scale_height = scale_width\n                else:\n                    # fit height\n                    scale_width = scale_height\n            elif self.__resize_method == \"upper_bound\":\n                # scale such that output size is upper bound\n                if scale_width < scale_height:\n                    # fit width\n                    scale_height = scale_width\n                else:\n                    # fit height\n                    scale_width = scale_height\n            elif self.__resize_method == \"minimal\":\n                # scale as least as possbile\n                if abs(1 - scale_width) < abs(1 - scale_height):\n                    # fit width\n                    scale_height = scale_width\n                else:\n                    # fit height\n                    scale_width = scale_height\n            else:\n                raise ValueError(\n                    f\"resize_method {self.__resize_method} not implemented\"\n                )\n\n        if self.__resize_method == \"lower_bound\":\n            new_height = self.constrain_to_multiple_of(\n                scale_height * height, min_val=self.__height\n            )\n            new_width = self.constrain_to_multiple_of(\n                scale_width * width, min_val=self.__width\n            )\n        elif self.__resize_method == \"upper_bound\":\n            new_height = self.constrain_to_multiple_of(\n                scale_height * height, max_val=self.__height\n            )\n            new_width = self.constrain_to_multiple_of(\n                scale_width * width, max_val=self.__width\n            )\n        elif self.__resize_method == \"minimal\":\n            new_height = self.constrain_to_multiple_of(scale_height * height)\n            new_width = self.constrain_to_multiple_of(scale_width * width)\n        else:\n            raise ValueError(f\"resize_method {self.__resize_method} not implemented\")\n\n        return (new_width, new_height)\n\n    def __call__(self, sample):\n        width, height = self.get_size(\n            sample[\"image\"].shape[1], sample[\"image\"].shape[0]\n        )\n\n        # resize sample\n        sample[\"image\"] = cv2.resize(\n            sample[\"image\"],\n            (width, 
height),\n            interpolation=self.__image_interpolation_method,\n        )\n\n        if self.__resize_target:\n            if \"disparity\" in sample:\n                sample[\"disparity\"] = cv2.resize(\n                    sample[\"disparity\"],\n                    (width, height),\n                    interpolation=cv2.INTER_NEAREST,\n                )\n\n            if \"depth\" in sample:\n                sample[\"depth\"] = cv2.resize(\n                    sample[\"depth\"], (width, height), interpolation=cv2.INTER_NEAREST\n                )\n\n            sample[\"mask\"] = cv2.resize(\n                sample[\"mask\"].astype(np.float32),\n                (width, height),\n                interpolation=cv2.INTER_NEAREST,\n            )\n            sample[\"mask\"] = sample[\"mask\"].astype(bool)\n\n        return sample\n\n\nclass NormalizeImage(object):\n    \"\"\"Normlize image by given mean and std.\n    \"\"\"\n\n    def __init__(self, mean, std):\n        self.__mean = mean\n        self.__std = std\n\n    def __call__(self, sample):\n        sample[\"image\"] = (sample[\"image\"] - self.__mean) / self.__std\n\n        return sample\n\n\nclass PrepareForNet(object):\n    \"\"\"Prepare sample for usage as network input.\n    \"\"\"\n\n    def __init__(self):\n        pass\n\n    def __call__(self, sample):\n        image = np.transpose(sample[\"image\"], (2, 0, 1))\n        sample[\"image\"] = np.ascontiguousarray(image).astype(np.float32)\n\n        if \"mask\" in sample:\n            sample[\"mask\"] = sample[\"mask\"].astype(np.float32)\n            sample[\"mask\"] = np.ascontiguousarray(sample[\"mask\"])\n\n        if \"disparity\" in sample:\n            disparity = sample[\"disparity\"].astype(np.float32)\n            sample[\"disparity\"] = np.ascontiguousarray(disparity)\n\n        if \"depth\" in sample:\n            depth = sample[\"depth\"].astype(np.float32)\n            sample[\"depth\"] = np.ascontiguousarray(depth)\n\n        return sample\n"
  },
  {
    "path": "ToonCrafter/ldm/modules/midas/midas/vit.py",
    "content": "import torch\nimport torch.nn as nn\nimport timm\nimport types\nimport math\nimport torch.nn.functional as F\n\n\nclass Slice(nn.Module):\n    def __init__(self, start_index=1):\n        super(Slice, self).__init__()\n        self.start_index = start_index\n\n    def forward(self, x):\n        return x[:, self.start_index :]\n\n\nclass AddReadout(nn.Module):\n    def __init__(self, start_index=1):\n        super(AddReadout, self).__init__()\n        self.start_index = start_index\n\n    def forward(self, x):\n        if self.start_index == 2:\n            readout = (x[:, 0] + x[:, 1]) / 2\n        else:\n            readout = x[:, 0]\n        return x[:, self.start_index :] + readout.unsqueeze(1)\n\n\nclass ProjectReadout(nn.Module):\n    def __init__(self, in_features, start_index=1):\n        super(ProjectReadout, self).__init__()\n        self.start_index = start_index\n\n        self.project = nn.Sequential(nn.Linear(2 * in_features, in_features), nn.GELU())\n\n    def forward(self, x):\n        readout = x[:, 0].unsqueeze(1).expand_as(x[:, self.start_index :])\n        features = torch.cat((x[:, self.start_index :], readout), -1)\n\n        return self.project(features)\n\n\nclass Transpose(nn.Module):\n    def __init__(self, dim0, dim1):\n        super(Transpose, self).__init__()\n        self.dim0 = dim0\n        self.dim1 = dim1\n\n    def forward(self, x):\n        x = x.transpose(self.dim0, self.dim1)\n        return x\n\n\ndef forward_vit(pretrained, x):\n    b, c, h, w = x.shape\n\n    glob = pretrained.model.forward_flex(x)\n\n    layer_1 = pretrained.activations[\"1\"]\n    layer_2 = pretrained.activations[\"2\"]\n    layer_3 = pretrained.activations[\"3\"]\n    layer_4 = pretrained.activations[\"4\"]\n\n    layer_1 = pretrained.act_postprocess1[0:2](layer_1)\n    layer_2 = pretrained.act_postprocess2[0:2](layer_2)\n    layer_3 = pretrained.act_postprocess3[0:2](layer_3)\n    layer_4 = pretrained.act_postprocess4[0:2](layer_4)\n\n    unflatten = nn.Sequential(\n        nn.Unflatten(\n            2,\n            torch.Size(\n                [\n                    h // pretrained.model.patch_size[1],\n                    w // pretrained.model.patch_size[0],\n                ]\n            ),\n        )\n    )\n\n    if layer_1.ndim == 3:\n        layer_1 = unflatten(layer_1)\n    if layer_2.ndim == 3:\n        layer_2 = unflatten(layer_2)\n    if layer_3.ndim == 3:\n        layer_3 = unflatten(layer_3)\n    if layer_4.ndim == 3:\n        layer_4 = unflatten(layer_4)\n\n    layer_1 = pretrained.act_postprocess1[3 : len(pretrained.act_postprocess1)](layer_1)\n    layer_2 = pretrained.act_postprocess2[3 : len(pretrained.act_postprocess2)](layer_2)\n    layer_3 = pretrained.act_postprocess3[3 : len(pretrained.act_postprocess3)](layer_3)\n    layer_4 = pretrained.act_postprocess4[3 : len(pretrained.act_postprocess4)](layer_4)\n\n    return layer_1, layer_2, layer_3, layer_4\n\n\ndef _resize_pos_embed(self, posemb, gs_h, gs_w):\n    posemb_tok, posemb_grid = (\n        posemb[:, : self.start_index],\n        posemb[0, self.start_index :],\n    )\n\n    gs_old = int(math.sqrt(len(posemb_grid)))\n\n    posemb_grid = posemb_grid.reshape(1, gs_old, gs_old, -1).permute(0, 3, 1, 2)\n    posemb_grid = F.interpolate(posemb_grid, size=(gs_h, gs_w), mode=\"bilinear\")\n    posemb_grid = posemb_grid.permute(0, 2, 3, 1).reshape(1, gs_h * gs_w, -1)\n\n    posemb = torch.cat([posemb_tok, posemb_grid], dim=1)\n\n    return posemb\n\n\ndef forward_flex(self, x):\n    b, c, h, w = 
x.shape\n\n    pos_embed = self._resize_pos_embed(\n        self.pos_embed, h // self.patch_size[1], w // self.patch_size[0]\n    )\n\n    B = x.shape[0]\n\n    if hasattr(self.patch_embed, \"backbone\"):\n        x = self.patch_embed.backbone(x)\n        if isinstance(x, (list, tuple)):\n            x = x[-1]  # last feature if backbone outputs list/tuple of features\n\n    x = self.patch_embed.proj(x).flatten(2).transpose(1, 2)\n\n    if getattr(self, \"dist_token\", None) is not None:\n        cls_tokens = self.cls_token.expand(\n            B, -1, -1\n        )  # stole cls_tokens impl from Phil Wang, thanks\n        dist_token = self.dist_token.expand(B, -1, -1)\n        x = torch.cat((cls_tokens, dist_token, x), dim=1)\n    else:\n        cls_tokens = self.cls_token.expand(\n            B, -1, -1\n        )  # stole cls_tokens impl from Phil Wang, thanks\n        x = torch.cat((cls_tokens, x), dim=1)\n\n    x = x + pos_embed\n    x = self.pos_drop(x)\n\n    for blk in self.blocks:\n        x = blk(x)\n\n    x = self.norm(x)\n\n    return x\n\n\nactivations = {}\n\n\ndef get_activation(name):\n    def hook(model, input, output):\n        activations[name] = output\n\n    return hook\n\n\ndef get_readout_oper(vit_features, features, use_readout, start_index=1):\n    if use_readout == \"ignore\":\n        readout_oper = [Slice(start_index)] * len(features)\n    elif use_readout == \"add\":\n        readout_oper = [AddReadout(start_index)] * len(features)\n    elif use_readout == \"project\":\n        readout_oper = [\n            ProjectReadout(vit_features, start_index) for out_feat in features\n        ]\n    else:\n        assert (\n            False\n        ), \"wrong operation for readout token, use_readout can be 'ignore', 'add', or 'project'\"\n\n    return readout_oper\n\n\ndef _make_vit_b16_backbone(\n    model,\n    features=[96, 192, 384, 768],\n    size=[384, 384],\n    hooks=[2, 5, 8, 11],\n    vit_features=768,\n    use_readout=\"ignore\",\n    start_index=1,\n):\n    pretrained = nn.Module()\n\n    pretrained.model = model\n    pretrained.model.blocks[hooks[0]].register_forward_hook(get_activation(\"1\"))\n    pretrained.model.blocks[hooks[1]].register_forward_hook(get_activation(\"2\"))\n    pretrained.model.blocks[hooks[2]].register_forward_hook(get_activation(\"3\"))\n    pretrained.model.blocks[hooks[3]].register_forward_hook(get_activation(\"4\"))\n\n    pretrained.activations = activations\n\n    readout_oper = get_readout_oper(vit_features, features, use_readout, start_index)\n\n    # 32, 48, 136, 384\n    pretrained.act_postprocess1 = nn.Sequential(\n        readout_oper[0],\n        Transpose(1, 2),\n        nn.Unflatten(2, torch.Size([size[0] // 16, size[1] // 16])),\n        nn.Conv2d(\n            in_channels=vit_features,\n            out_channels=features[0],\n            kernel_size=1,\n            stride=1,\n            padding=0,\n        ),\n        nn.ConvTranspose2d(\n            in_channels=features[0],\n            out_channels=features[0],\n            kernel_size=4,\n            stride=4,\n            padding=0,\n            bias=True,\n            dilation=1,\n            groups=1,\n        ),\n    )\n\n    pretrained.act_postprocess2 = nn.Sequential(\n        readout_oper[1],\n        Transpose(1, 2),\n        nn.Unflatten(2, torch.Size([size[0] // 16, size[1] // 16])),\n        nn.Conv2d(\n            in_channels=vit_features,\n            out_channels=features[1],\n            kernel_size=1,\n            stride=1,\n            padding=0,\n     
   ),\n        nn.ConvTranspose2d(\n            in_channels=features[1],\n            out_channels=features[1],\n            kernel_size=2,\n            stride=2,\n            padding=0,\n            bias=True,\n            dilation=1,\n            groups=1,\n        ),\n    )\n\n    pretrained.act_postprocess3 = nn.Sequential(\n        readout_oper[2],\n        Transpose(1, 2),\n        nn.Unflatten(2, torch.Size([size[0] // 16, size[1] // 16])),\n        nn.Conv2d(\n            in_channels=vit_features,\n            out_channels=features[2],\n            kernel_size=1,\n            stride=1,\n            padding=0,\n        ),\n    )\n\n    pretrained.act_postprocess4 = nn.Sequential(\n        readout_oper[3],\n        Transpose(1, 2),\n        nn.Unflatten(2, torch.Size([size[0] // 16, size[1] // 16])),\n        nn.Conv2d(\n            in_channels=vit_features,\n            out_channels=features[3],\n            kernel_size=1,\n            stride=1,\n            padding=0,\n        ),\n        nn.Conv2d(\n            in_channels=features[3],\n            out_channels=features[3],\n            kernel_size=3,\n            stride=2,\n            padding=1,\n        ),\n    )\n\n    pretrained.model.start_index = start_index\n    pretrained.model.patch_size = [16, 16]\n\n    # We inject this function into the VisionTransformer instances so that\n    # we can use it with interpolated position embeddings without modifying the library source.\n    pretrained.model.forward_flex = types.MethodType(forward_flex, pretrained.model)\n    pretrained.model._resize_pos_embed = types.MethodType(\n        _resize_pos_embed, pretrained.model\n    )\n\n    return pretrained\n\n\ndef _make_pretrained_vitl16_384(pretrained, use_readout=\"ignore\", hooks=None):\n    model = timm.create_model(\"vit_large_patch16_384\", pretrained=pretrained)\n\n    hooks = [5, 11, 17, 23] if hooks == None else hooks\n    return _make_vit_b16_backbone(\n        model,\n        features=[256, 512, 1024, 1024],\n        hooks=hooks,\n        vit_features=1024,\n        use_readout=use_readout,\n    )\n\n\ndef _make_pretrained_vitb16_384(pretrained, use_readout=\"ignore\", hooks=None):\n    model = timm.create_model(\"vit_base_patch16_384\", pretrained=pretrained)\n\n    hooks = [2, 5, 8, 11] if hooks == None else hooks\n    return _make_vit_b16_backbone(\n        model, features=[96, 192, 384, 768], hooks=hooks, use_readout=use_readout\n    )\n\n\ndef _make_pretrained_deitb16_384(pretrained, use_readout=\"ignore\", hooks=None):\n    model = timm.create_model(\"vit_deit_base_patch16_384\", pretrained=pretrained)\n\n    hooks = [2, 5, 8, 11] if hooks == None else hooks\n    return _make_vit_b16_backbone(\n        model, features=[96, 192, 384, 768], hooks=hooks, use_readout=use_readout\n    )\n\n\ndef _make_pretrained_deitb16_distil_384(pretrained, use_readout=\"ignore\", hooks=None):\n    model = timm.create_model(\n        \"vit_deit_base_distilled_patch16_384\", pretrained=pretrained\n    )\n\n    hooks = [2, 5, 8, 11] if hooks == None else hooks\n    return _make_vit_b16_backbone(\n        model,\n        features=[96, 192, 384, 768],\n        hooks=hooks,\n        use_readout=use_readout,\n        start_index=2,\n    )\n\n\ndef _make_vit_b_rn50_backbone(\n    model,\n    features=[256, 512, 768, 768],\n    size=[384, 384],\n    hooks=[0, 1, 8, 11],\n    vit_features=768,\n    use_vit_only=False,\n    use_readout=\"ignore\",\n    start_index=1,\n):\n    pretrained = nn.Module()\n\n    pretrained.model = model\n\n    if 
use_vit_only == True:\n        pretrained.model.blocks[hooks[0]].register_forward_hook(get_activation(\"1\"))\n        pretrained.model.blocks[hooks[1]].register_forward_hook(get_activation(\"2\"))\n    else:\n        pretrained.model.patch_embed.backbone.stages[0].register_forward_hook(\n            get_activation(\"1\")\n        )\n        pretrained.model.patch_embed.backbone.stages[1].register_forward_hook(\n            get_activation(\"2\")\n        )\n\n    pretrained.model.blocks[hooks[2]].register_forward_hook(get_activation(\"3\"))\n    pretrained.model.blocks[hooks[3]].register_forward_hook(get_activation(\"4\"))\n\n    pretrained.activations = activations\n\n    readout_oper = get_readout_oper(vit_features, features, use_readout, start_index)\n\n    if use_vit_only == True:\n        pretrained.act_postprocess1 = nn.Sequential(\n            readout_oper[0],\n            Transpose(1, 2),\n            nn.Unflatten(2, torch.Size([size[0] // 16, size[1] // 16])),\n            nn.Conv2d(\n                in_channels=vit_features,\n                out_channels=features[0],\n                kernel_size=1,\n                stride=1,\n                padding=0,\n            ),\n            nn.ConvTranspose2d(\n                in_channels=features[0],\n                out_channels=features[0],\n                kernel_size=4,\n                stride=4,\n                padding=0,\n                bias=True,\n                dilation=1,\n                groups=1,\n            ),\n        )\n\n        pretrained.act_postprocess2 = nn.Sequential(\n            readout_oper[1],\n            Transpose(1, 2),\n            nn.Unflatten(2, torch.Size([size[0] // 16, size[1] // 16])),\n            nn.Conv2d(\n                in_channels=vit_features,\n                out_channels=features[1],\n                kernel_size=1,\n                stride=1,\n                padding=0,\n            ),\n            nn.ConvTranspose2d(\n                in_channels=features[1],\n                out_channels=features[1],\n                kernel_size=2,\n                stride=2,\n                padding=0,\n                bias=True,\n                dilation=1,\n                groups=1,\n            ),\n        )\n    else:\n        pretrained.act_postprocess1 = nn.Sequential(\n            nn.Identity(), nn.Identity(), nn.Identity()\n        )\n        pretrained.act_postprocess2 = nn.Sequential(\n            nn.Identity(), nn.Identity(), nn.Identity()\n        )\n\n    pretrained.act_postprocess3 = nn.Sequential(\n        readout_oper[2],\n        Transpose(1, 2),\n        nn.Unflatten(2, torch.Size([size[0] // 16, size[1] // 16])),\n        nn.Conv2d(\n            in_channels=vit_features,\n            out_channels=features[2],\n            kernel_size=1,\n            stride=1,\n            padding=0,\n        ),\n    )\n\n    pretrained.act_postprocess4 = nn.Sequential(\n        readout_oper[3],\n        Transpose(1, 2),\n        nn.Unflatten(2, torch.Size([size[0] // 16, size[1] // 16])),\n        nn.Conv2d(\n            in_channels=vit_features,\n            out_channels=features[3],\n            kernel_size=1,\n            stride=1,\n            padding=0,\n        ),\n        nn.Conv2d(\n            in_channels=features[3],\n            out_channels=features[3],\n            kernel_size=3,\n            stride=2,\n            padding=1,\n        ),\n    )\n\n    pretrained.model.start_index = start_index\n    pretrained.model.patch_size = [16, 16]\n\n    # We inject this function into the 
VisionTransformer instances so that\n    # we can use it with interpolated position embeddings without modifying the library source.\n    pretrained.model.forward_flex = types.MethodType(forward_flex, pretrained.model)\n\n    # We inject this function into the VisionTransformer instances so that\n    # we can use it with interpolated position embeddings without modifying the library source.\n    pretrained.model._resize_pos_embed = types.MethodType(\n        _resize_pos_embed, pretrained.model\n    )\n\n    return pretrained\n\n\ndef _make_pretrained_vitb_rn50_384(\n    pretrained, use_readout=\"ignore\", hooks=None, use_vit_only=False\n):\n    model = timm.create_model(\"vit_base_resnet50_384\", pretrained=pretrained)\n\n    hooks = [0, 1, 8, 11] if hooks is None else hooks\n    return _make_vit_b_rn50_backbone(\n        model,\n        features=[256, 512, 768, 768],\n        size=[384, 384],\n        hooks=hooks,\n        use_vit_only=use_vit_only,\n        use_readout=use_readout,\n    )\n"
  },
  {
    "path": "ToonCrafter/ldm/modules/midas/utils.py",
    "content": "\"\"\"Utils for monoDepth.\"\"\"\nimport sys\nimport re\nimport numpy as np\nimport cv2\nimport torch\n\n\ndef read_pfm(path):\n    \"\"\"Read pfm file.\n\n    Args:\n        path (str): path to file\n\n    Returns:\n        tuple: (data, scale)\n    \"\"\"\n    with open(path, \"rb\") as file:\n\n        color = None\n        width = None\n        height = None\n        scale = None\n        endian = None\n\n        header = file.readline().rstrip()\n        if header.decode(\"ascii\") == \"PF\":\n            color = True\n        elif header.decode(\"ascii\") == \"Pf\":\n            color = False\n        else:\n            raise Exception(\"Not a PFM file: \" + path)\n\n        dim_match = re.match(r\"^(\\d+)\\s(\\d+)\\s$\", file.readline().decode(\"ascii\"))\n        if dim_match:\n            width, height = list(map(int, dim_match.groups()))\n        else:\n            raise Exception(\"Malformed PFM header.\")\n\n        scale = float(file.readline().decode(\"ascii\").rstrip())\n        if scale < 0:\n            # little-endian\n            endian = \"<\"\n            scale = -scale\n        else:\n            # big-endian\n            endian = \">\"\n\n        data = np.fromfile(file, endian + \"f\")\n        shape = (height, width, 3) if color else (height, width)\n\n        data = np.reshape(data, shape)\n        data = np.flipud(data)\n\n        return data, scale\n\n\ndef write_pfm(path, image, scale=1):\n    \"\"\"Write pfm file.\n\n    Args:\n        path (str): pathto file\n        image (array): data\n        scale (int, optional): Scale. Defaults to 1.\n    \"\"\"\n\n    with open(path, \"wb\") as file:\n        color = None\n\n        if image.dtype.name != \"float32\":\n            raise Exception(\"Image dtype must be float32.\")\n\n        image = np.flipud(image)\n\n        if len(image.shape) == 3 and image.shape[2] == 3:  # color image\n            color = True\n        elif (\n            len(image.shape) == 2 or len(image.shape) == 3 and image.shape[2] == 1\n        ):  # greyscale\n            color = False\n        else:\n            raise Exception(\"Image must have H x W x 3, H x W x 1 or H x W dimensions.\")\n\n        file.write(\"PF\\n\" if color else \"Pf\\n\".encode())\n        file.write(\"%d %d\\n\".encode() % (image.shape[1], image.shape[0]))\n\n        endian = image.dtype.byteorder\n\n        if endian == \"<\" or endian == \"=\" and sys.byteorder == \"little\":\n            scale = -scale\n\n        file.write(\"%f\\n\".encode() % scale)\n\n        image.tofile(file)\n\n\ndef read_image(path):\n    \"\"\"Read image and output RGB image (0-1).\n\n    Args:\n        path (str): path to file\n\n    Returns:\n        array: RGB image (0-1)\n    \"\"\"\n    img = cv2.imread(path)\n\n    if img.ndim == 2:\n        img = cv2.cvtColor(img, cv2.COLOR_GRAY2BGR)\n\n    img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB) / 255.0\n\n    return img\n\n\ndef resize_image(img):\n    \"\"\"Resize image and make it fit for network.\n\n    Args:\n        img (array): image\n\n    Returns:\n        tensor: data ready for network\n    \"\"\"\n    height_orig = img.shape[0]\n    width_orig = img.shape[1]\n\n    if width_orig > height_orig:\n        scale = width_orig / 384\n    else:\n        scale = height_orig / 384\n\n    height = (np.ceil(height_orig / scale / 32) * 32).astype(int)\n    width = (np.ceil(width_orig / scale / 32) * 32).astype(int)\n\n    img_resized = cv2.resize(img, (width, height), interpolation=cv2.INTER_AREA)\n\n    img_resized = (\n        
torch.from_numpy(np.transpose(img_resized, (2, 0, 1))).contiguous().float()\n    )\n    img_resized = img_resized.unsqueeze(0)\n\n    return img_resized\n\n\ndef resize_depth(depth, width, height):\n    \"\"\"Resize depth map and bring to CPU (numpy).\n\n    Args:\n        depth (tensor): depth\n        width (int): image width\n        height (int): image height\n\n    Returns:\n        array: processed depth\n    \"\"\"\n    depth = torch.squeeze(depth[0, :, :, :]).to(\"cpu\")\n\n    depth_resized = cv2.resize(\n        depth.numpy(), (width, height), interpolation=cv2.INTER_CUBIC\n    )\n\n    return depth_resized\n\n\ndef write_depth(path, depth, bits=1):\n    \"\"\"Write depth map to pfm and png file.\n\n    Args:\n        path (str): filepath without extension\n        depth (array): depth\n        bits (int, optional): bytes per png sample (1 for uint8, 2 for uint16). Defaults to 1.\n    \"\"\"\n    write_pfm(path + \".pfm\", depth.astype(np.float32))\n\n    depth_min = depth.min()\n    depth_max = depth.max()\n\n    max_val = (2**(8*bits))-1\n\n    if depth_max - depth_min > np.finfo(\"float\").eps:\n        out = max_val * (depth - depth_min) / (depth_max - depth_min)\n    else:\n        out = np.zeros(depth.shape, dtype=depth.dtype)\n\n    if bits == 1:\n        cv2.imwrite(path + \".png\", out.astype(\"uint8\"))\n    elif bits == 2:\n        cv2.imwrite(path + \".png\", out.astype(\"uint16\"))\n\n    return\n"
  },
  {
    "path": "ToonCrafter/ldm/util.py",
    "content": "import importlib\n\nimport torch\nfrom torch import optim\nimport numpy as np\n\nfrom inspect import isfunction\nfrom PIL import Image, ImageDraw, ImageFont\n\n\ndef log_txt_as_img(wh, xc, size=10):\n    # wh a tuple of (width, height)\n    # xc a list of captions to plot\n    b = len(xc)\n    txts = list()\n    for bi in range(b):\n        txt = Image.new(\"RGB\", wh, color=\"white\")\n        draw = ImageDraw.Draw(txt)\n        font = ImageFont.truetype('font/DejaVuSans.ttf', size=size)\n        nc = int(40 * (wh[0] / 256))\n        lines = \"\\n\".join(xc[bi][start:start + nc] for start in range(0, len(xc[bi]), nc))\n\n        try:\n            draw.text((0, 0), lines, fill=\"black\", font=font)\n        except UnicodeEncodeError:\n            print(\"Cant encode string for logging. Skipping.\")\n\n        txt = np.array(txt).transpose(2, 0, 1) / 127.5 - 1.0\n        txts.append(txt)\n    txts = np.stack(txts)\n    txts = torch.tensor(txts)\n    return txts\n\n\ndef ismap(x):\n    if not isinstance(x, torch.Tensor):\n        return False\n    return (len(x.shape) == 4) and (x.shape[1] > 3)\n\n\ndef isimage(x):\n    if not isinstance(x,torch.Tensor):\n        return False\n    return (len(x.shape) == 4) and (x.shape[1] == 3 or x.shape[1] == 1)\n\n\ndef exists(x):\n    return x is not None\n\n\ndef default(val, d):\n    if exists(val):\n        return val\n    return d() if isfunction(d) else d\n\n\ndef mean_flat(tensor):\n    \"\"\"\n    https://github.com/openai/guided-diffusion/blob/27c20a8fab9cb472df5d6bdd6c8d11c8f430b924/guided_diffusion/nn.py#L86\n    Take the mean over all non-batch dimensions.\n    \"\"\"\n    return tensor.mean(dim=list(range(1, len(tensor.shape))))\n\n\ndef count_params(model, verbose=False):\n    total_params = sum(p.numel() for p in model.parameters())\n    if verbose:\n        print(f\"{model.__class__.__name__} has {total_params*1.e-6:.2f} M params.\")\n    return total_params\n\n\ndef instantiate_from_config(config):\n    if not \"target\" in config:\n        if config == '__is_first_stage__':\n            return None\n        elif config == \"__is_unconditional__\":\n            return None\n        raise KeyError(\"Expected key `target` to instantiate.\")\n    return get_obj_from_str(config[\"target\"])(**config.get(\"params\", dict()))\n\n\ndef get_obj_from_str(string, reload=False):\n    module, cls = string.rsplit(\".\", 1)\n    if reload:\n        module_imp = importlib.import_module(module)\n        importlib.reload(module_imp)\n    return getattr(importlib.import_module(module, package=None), cls)\n\n\nclass AdamWwithEMAandWings(optim.Optimizer):\n    # credit to https://gist.github.com/crowsonkb/65f7265353f403714fce3b2595e0b298\n    def __init__(self, params, lr=1.e-3, betas=(0.9, 0.999), eps=1.e-8,  # TODO: check hyperparameters before using\n                 weight_decay=1.e-2, amsgrad=False, ema_decay=0.9999,   # ema decay to match previous code\n                 ema_power=1., param_names=()):\n        \"\"\"AdamW that saves EMA versions of the parameters.\"\"\"\n        if not 0.0 <= lr:\n            raise ValueError(\"Invalid learning rate: {}\".format(lr))\n        if not 0.0 <= eps:\n            raise ValueError(\"Invalid epsilon value: {}\".format(eps))\n        if not 0.0 <= betas[0] < 1.0:\n            raise ValueError(\"Invalid beta parameter at index 0: {}\".format(betas[0]))\n        if not 0.0 <= betas[1] < 1.0:\n            raise ValueError(\"Invalid beta parameter at index 1: {}\".format(betas[1]))\n        if not 
0.0 <= weight_decay:\n            raise ValueError(\"Invalid weight_decay value: {}\".format(weight_decay))\n        if not 0.0 <= ema_decay <= 1.0:\n            raise ValueError(\"Invalid ema_decay value: {}\".format(ema_decay))\n        defaults = dict(lr=lr, betas=betas, eps=eps,\n                        weight_decay=weight_decay, amsgrad=amsgrad, ema_decay=ema_decay,\n                        ema_power=ema_power, param_names=param_names)\n        super().__init__(params, defaults)\n\n    def __setstate__(self, state):\n        super().__setstate__(state)\n        for group in self.param_groups:\n            group.setdefault('amsgrad', False)\n\n    @torch.no_grad()\n    def step(self, closure=None):\n        \"\"\"Performs a single optimization step.\n        Args:\n            closure (callable, optional): A closure that reevaluates the model\n                and returns the loss.\n        \"\"\"\n        loss = None\n        if closure is not None:\n            with torch.enable_grad():\n                loss = closure()\n\n        for group in self.param_groups:\n            params_with_grad = []\n            grads = []\n            exp_avgs = []\n            exp_avg_sqs = []\n            ema_params_with_grad = []\n            state_sums = []\n            max_exp_avg_sqs = []\n            state_steps = []\n            amsgrad = group['amsgrad']\n            beta1, beta2 = group['betas']\n            ema_decay = group['ema_decay']\n            ema_power = group['ema_power']\n\n            for p in group['params']:\n                if p.grad is None:\n                    continue\n                params_with_grad.append(p)\n                if p.grad.is_sparse:\n                    raise RuntimeError('AdamW does not support sparse gradients')\n                grads.append(p.grad)\n\n                state = self.state[p]\n\n                # State initialization\n                if len(state) == 0:\n                    state['step'] = 0\n                    # Exponential moving average of gradient values\n                    state['exp_avg'] = torch.zeros_like(p, memory_format=torch.preserve_format)\n                    # Exponential moving average of squared gradient values\n                    state['exp_avg_sq'] = torch.zeros_like(p, memory_format=torch.preserve_format)\n                    if amsgrad:\n                        # Maintains max of all exp. moving avg. of sq. grad. 
values\n                        state['max_exp_avg_sq'] = torch.zeros_like(p, memory_format=torch.preserve_format)\n                    # Exponential moving average of parameter values\n                    state['param_exp_avg'] = p.detach().float().clone()\n\n                exp_avgs.append(state['exp_avg'])\n                exp_avg_sqs.append(state['exp_avg_sq'])\n                ema_params_with_grad.append(state['param_exp_avg'])\n\n                if amsgrad:\n                    max_exp_avg_sqs.append(state['max_exp_avg_sq'])\n\n                # update the steps for each param group update\n                state['step'] += 1\n                # record the step after step update\n                state_steps.append(state['step'])\n\n            optim._functional.adamw(params_with_grad,\n                    grads,\n                    exp_avgs,\n                    exp_avg_sqs,\n                    max_exp_avg_sqs,\n                    state_steps,\n                    amsgrad=amsgrad,\n                    beta1=beta1,\n                    beta2=beta2,\n                    lr=group['lr'],\n                    weight_decay=group['weight_decay'],\n                    eps=group['eps'],\n                    maximize=False)\n\n            cur_ema_decay = min(ema_decay, 1 - state['step'] ** -ema_power)\n            for param, ema_param in zip(params_with_grad, ema_params_with_grad):\n                ema_param.mul_(cur_ema_decay).add_(param.float(), alpha=1 - cur_ema_decay)\n\n        return loss"
  },
  {
    "path": "ToonCrafter/lvdm/__init__.py",
    "content": ""
  },
  {
    "path": "ToonCrafter/lvdm/basics.py",
    "content": "# adopted from\n# https://github.com/openai/improved-diffusion/blob/main/improved_diffusion/gaussian_diffusion.py\n# and\n# https://github.com/lucidrains/denoising-diffusion-pytorch/blob/7706bdfc6f527f58d33f84b7b522e61e6e3164b3/denoising_diffusion_pytorch/denoising_diffusion_pytorch.py\n# and\n# https://github.com/openai/guided-diffusion/blob/0ba878e517b276c45d1195eb29f6f5f72659a05b/guided_diffusion/nn.py\n#\n# thanks!\n\nimport torch.nn as nn\nfrom ToonCrafter.utils.utils import instantiate_from_config\n\n\ndef disabled_train(self, mode=True):\n    \"\"\"Overwrite model.train with this function to make sure train/eval mode\n    does not change anymore.\"\"\"\n    return self\n\n\ndef zero_module(module):\n    \"\"\"\n    Zero out the parameters of a module and return it.\n    \"\"\"\n    for p in module.parameters():\n        p.detach().zero_()\n    return module\n\n\ndef scale_module(module, scale):\n    \"\"\"\n    Scale the parameters of a module and return it.\n    \"\"\"\n    for p in module.parameters():\n        p.detach().mul_(scale)\n    return module\n\n\ndef conv_nd(dims, *args, **kwargs):\n    \"\"\"\n    Create a 1D, 2D, or 3D convolution module.\n    \"\"\"\n    if dims == 1:\n        return nn.Conv1d(*args, **kwargs)\n    elif dims == 2:\n        return nn.Conv2d(*args, **kwargs)\n    elif dims == 3:\n        return nn.Conv3d(*args, **kwargs)\n    raise ValueError(f\"unsupported dimensions: {dims}\")\n\n\ndef linear(*args, **kwargs):\n    \"\"\"\n    Create a linear module.\n    \"\"\"\n    return nn.Linear(*args, **kwargs)\n\n\ndef avg_pool_nd(dims, *args, **kwargs):\n    \"\"\"\n    Create a 1D, 2D, or 3D average pooling module.\n    \"\"\"\n    if dims == 1:\n        return nn.AvgPool1d(*args, **kwargs)\n    elif dims == 2:\n        return nn.AvgPool2d(*args, **kwargs)\n    elif dims == 3:\n        return nn.AvgPool3d(*args, **kwargs)\n    raise ValueError(f\"unsupported dimensions: {dims}\")\n\n\ndef nonlinearity(type='silu'):\n    if type == 'silu':\n        return nn.SiLU()\n    elif type == 'leaky_relu':\n        return nn.LeakyReLU()\n\n\nclass GroupNormSpecific(nn.GroupNorm):\n    def forward(self, x):\n        return super().forward(x.float()).type(x.dtype)\n\n\ndef normalization(channels, num_groups=32):\n    \"\"\"\n    Make a standard normalization layer.\n    :param channels: number of input channels.\n    :return: an nn.Module for normalization.\n    \"\"\"\n    return GroupNormSpecific(num_groups, channels)\n\n\nclass HybridConditioner(nn.Module):\n\n    def __init__(self, c_concat_config, c_crossattn_config):\n        super().__init__()\n        self.concat_conditioner = instantiate_from_config(c_concat_config)\n        self.crossattn_conditioner = instantiate_from_config(c_crossattn_config)\n\n    def forward(self, c_concat, c_crossattn):\n        c_concat = self.concat_conditioner(c_concat)\n        c_crossattn = self.crossattn_conditioner(c_crossattn)\n        return {'c_concat': [c_concat], 'c_crossattn': [c_crossattn]}\n"
  },
  {
    "path": "ToonCrafter/lvdm/common.py",
    "content": "import math\nfrom inspect import isfunction\nimport torch\nfrom torch import nn\nimport torch.distributed as dist\n\n\ndef gather_data(data, return_np=True):\n    ''' gather data from multiple processes to one list '''\n    data_list = [torch.zeros_like(data) for _ in range(dist.get_world_size())]\n    dist.all_gather(data_list, data)  # gather not supported with NCCL\n    if return_np:\n        data_list = [data.cpu().numpy() for data in data_list]\n    return data_list\n\ndef autocast(f):\n    def do_autocast(*args, **kwargs):\n        with torch.cuda.amp.autocast(enabled=True,\n                                     dtype=torch.get_autocast_gpu_dtype(),\n                                     cache_enabled=torch.is_autocast_cache_enabled()):\n            return f(*args, **kwargs)\n    return do_autocast\n\n\ndef extract_into_tensor(a, t, x_shape):\n    b, *_ = t.shape\n    out = a.gather(-1, t)\n    return out.reshape(b, *((1,) * (len(x_shape) - 1)))\n\n\ndef noise_like(shape, device, repeat=False):\n    repeat_noise = lambda: torch.randn((1, *shape[1:]), device=device).repeat(shape[0], *((1,) * (len(shape) - 1)))\n    noise = lambda: torch.randn(shape, device=device)\n    return repeat_noise() if repeat else noise()\n\n\ndef default(val, d):\n    if exists(val):\n        return val\n    return d() if isfunction(d) else d\n\ndef exists(val):\n    return val is not None\n\ndef identity(*args, **kwargs):\n    return nn.Identity()\n\ndef uniq(arr):\n    return{el: True for el in arr}.keys()\n\ndef mean_flat(tensor):\n    \"\"\"\n    Take the mean over all non-batch dimensions.\n    \"\"\"\n    return tensor.mean(dim=list(range(1, len(tensor.shape))))\n\ndef ismap(x):\n    if not isinstance(x, torch.Tensor):\n        return False\n    return (len(x.shape) == 4) and (x.shape[1] > 3)\n\ndef isimage(x):\n    if not isinstance(x,torch.Tensor):\n        return False\n    return (len(x.shape) == 4) and (x.shape[1] == 3 or x.shape[1] == 1)\n\ndef max_neg_value(t):\n    return -torch.finfo(t.dtype).max\n\ndef shape_to_str(x):\n    shape_str = \"x\".join([str(x) for x in x.shape])\n    return shape_str\n\ndef init_(tensor):\n    dim = tensor.shape[-1]\n    std = 1 / math.sqrt(dim)\n    tensor.uniform_(-std, std)\n    return tensor\n\nckpt = torch.utils.checkpoint.checkpoint\ndef checkpoint(func, inputs, params, flag):\n    \"\"\"\n    Evaluate a function without caching intermediate activations, allowing for\n    reduced memory at the expense of extra compute in the backward pass.\n    :param func: the function to evaluate.\n    :param inputs: the argument sequence to pass to `func`.\n    :param params: a sequence of parameters `func` depends on but does not\n                   explicitly take as arguments.\n    :param flag: if False, disable gradient checkpointing.\n    \"\"\"\n    if flag:\n        return ckpt(func, *inputs, use_reentrant=False)\n    else:\n        return func(*inputs)"
  },
  {
    "path": "ToonCrafter/lvdm/data/base.py",
    "content": "from abc import abstractmethod\nfrom torch.utils.data import IterableDataset\n\n\nclass Txt2ImgIterableBaseDataset(IterableDataset):\n    '''\n    Define an interface to make the IterableDatasets for text2img data chainable\n    '''\n    def __init__(self, num_records=0, valid_ids=None, size=256):\n        super().__init__()\n        self.num_records = num_records\n        self.valid_ids = valid_ids\n        self.sample_ids = valid_ids\n        self.size = size\n\n        print(f'{self.__class__.__name__} dataset contains {self.__len__()} examples.')\n\n    def __len__(self):\n        return self.num_records\n\n    @abstractmethod\n    def __iter__(self):\n        pass"
  },
  {
    "path": "ToonCrafter/lvdm/data/webvid.py",
    "content": "import os\nimport random\nfrom tqdm import tqdm\nimport pandas as pd\nfrom decord import VideoReader, cpu\n\nimport torch\nfrom torch.utils.data import Dataset\nfrom torch.utils.data import DataLoader\nfrom torchvision import transforms\n\n\nclass WebVid(Dataset):\n    \"\"\"\n    WebVid Dataset.\n    Assumes webvid data is structured as follows.\n    Webvid/\n        videos/\n            000001_000050/      ($page_dir)\n                1.mp4           (videoid.mp4)\n                ...\n                5000.mp4\n            ...\n    \"\"\"\n    def __init__(self,\n                 meta_path,\n                 data_dir,\n                 subsample=None,\n                 video_length=16,\n                 resolution=[256, 512],\n                 frame_stride=1,\n                 frame_stride_min=1,\n                 spatial_transform=None,\n                 crop_resolution=None,\n                 fps_max=None,\n                 load_raw_resolution=False,\n                 fixed_fps=None,\n                 random_fs=False,\n                 ):\n        self.meta_path = meta_path\n        self.data_dir = data_dir\n        self.subsample = subsample\n        self.video_length = video_length\n        self.resolution = [resolution, resolution] if isinstance(resolution, int) else resolution\n        self.fps_max = fps_max\n        self.frame_stride = frame_stride\n        self.frame_stride_min = frame_stride_min\n        self.fixed_fps = fixed_fps\n        self.load_raw_resolution = load_raw_resolution\n        self.random_fs = random_fs\n        self._load_metadata()\n        if spatial_transform is not None:\n            if spatial_transform == \"random_crop\":\n                self.spatial_transform = transforms.RandomCrop(crop_resolution)\n            elif spatial_transform == \"center_crop\":\n                self.spatial_transform = transforms.Compose([\n                    transforms.CenterCrop(resolution),\n                    ])            \n            elif spatial_transform == \"resize_center_crop\":\n                # assert(self.resolution[0] == self.resolution[1])\n                self.spatial_transform = transforms.Compose([\n                    transforms.Resize(min(self.resolution)),\n                    transforms.CenterCrop(self.resolution),\n                    ])\n            elif spatial_transform == \"resize\":\n                self.spatial_transform = transforms.Resize(self.resolution)\n            else:\n                raise NotImplementedError\n        else:\n            self.spatial_transform = None\n                \n    def _load_metadata(self):\n        metadata = pd.read_csv(self.meta_path)\n        print(f'>>> {len(metadata)} data samples loaded.')\n        if self.subsample is not None:\n            metadata = metadata.sample(self.subsample, random_state=0)\n   \n        metadata['caption'] = metadata['name']\n        del metadata['name']\n        self.metadata = metadata\n        self.metadata.dropna(inplace=True)\n\n    def _get_video_path(self, sample):\n        rel_video_fp = os.path.join(sample['page_dir'], str(sample['videoid']) + '.mp4')\n        full_video_fp = os.path.join(self.data_dir, 'videos', rel_video_fp)\n        return full_video_fp\n    \n    def __getitem__(self, index):\n        if self.random_fs:\n            frame_stride = random.randint(self.frame_stride_min, self.frame_stride)\n        else:\n            frame_stride = self.frame_stride\n\n        ## get frames until success\n        while True:\n            index = index 
% len(self.metadata)\n            sample = self.metadata.iloc[index]\n            video_path = self._get_video_path(sample)\n            ## video_path should be in the format of \"....../WebVid/videos/$page_dir/$videoid.mp4\"\n            caption = sample['caption']\n\n            try:\n                if self.load_raw_resolution:\n                    video_reader = VideoReader(video_path, ctx=cpu(0))\n                else:\n                    video_reader = VideoReader(video_path, ctx=cpu(0), width=530, height=300)\n                if len(video_reader) < self.video_length:\n                    print(f\"video length ({len(video_reader)}) is smaller than target length({self.video_length})\")\n                    index += 1\n                    continue\n                else:\n                    pass\n            except:\n                index += 1\n                print(f\"Load video failed! path = {video_path}\")\n                continue\n            \n            fps_ori = video_reader.get_avg_fps()\n            if self.fixed_fps is not None:\n                frame_stride = int(frame_stride * (1.0 * fps_ori / self.fixed_fps))\n\n            ## to avoid extreme cases when fixed_fps is used\n            frame_stride = max(frame_stride, 1)\n            \n            ## get valid range (adapting case by case)\n            required_frame_num = frame_stride * (self.video_length-1) + 1\n            frame_num = len(video_reader)\n            if frame_num < required_frame_num:\n                ## drop extra samples if fixed fps is required\n                if self.fixed_fps is not None and frame_num < required_frame_num * 0.5:\n                    index += 1\n                    continue\n                else:\n                    frame_stride = frame_num // self.video_length\n                    required_frame_num = frame_stride * (self.video_length-1) + 1\n\n            ## select a random clip\n            random_range = frame_num - required_frame_num\n            start_idx = random.randint(0, random_range) if random_range > 0 else 0\n\n            ## calculate frame indices\n            frame_indices = [start_idx + frame_stride*i for i in range(self.video_length)]\n            try:\n                frames = video_reader.get_batch(frame_indices)\n                break\n            except:\n                print(f\"Get frames failed! 
path = {video_path}; [max_ind vs frame_total:{max(frame_indices)} / {frame_num}]\")\n                index += 1\n                continue\n        \n        ## process data\n        assert(frames.shape[0] == self.video_length),f'{len(frames)}, self.video_length={self.video_length}'\n        frames = torch.tensor(frames.asnumpy()).permute(3, 0, 1, 2).float() # [t,h,w,c] -> [c,t,h,w]\n        \n        if self.spatial_transform is not None:\n            frames = self.spatial_transform(frames)\n        \n        if self.resolution is not None:\n            assert (frames.shape[2], frames.shape[3]) == (self.resolution[0], self.resolution[1]), f'frames={frames.shape}, self.resolution={self.resolution}'\n        \n        ## turn frames tensors to [-1,1]\n        frames = (frames / 255 - 0.5) * 2\n        fps_clip = fps_ori // frame_stride\n        if self.fps_max is not None and fps_clip > self.fps_max:\n            fps_clip = self.fps_max\n\n        data = {'video': frames, 'caption': caption, 'path': video_path, 'fps': fps_clip, 'frame_stride': frame_stride}\n        return data\n    \n    def __len__(self):\n        return len(self.metadata)\n\n\nif __name__== \"__main__\":\n    meta_path = \"\" ## path to the meta file\n    data_dir = \"\" ## path to the data directory\n    save_dir = \"\" ## path to the save directory\n    dataset = WebVid(meta_path,\n                 data_dir,\n                 subsample=None,\n                 video_length=16,\n                 resolution=[256,448],\n                 frame_stride=4,\n                 spatial_transform=\"resize_center_crop\",\n                 crop_resolution=None,\n                 fps_max=None,\n                 load_raw_resolution=True\n                 )\n    dataloader = DataLoader(dataset,\n                    batch_size=1,\n                    num_workers=0,\n                    shuffle=False)\n\n    \n    import sys\n    sys.path.insert(1, os.path.join(sys.path[0], '..', '..'))\n    from utils.save_video import tensor_to_mp4\n    for i, batch in tqdm(enumerate(dataloader), desc=\"Data Batch\"):\n        video = batch['video']\n        name = batch['path'][0].split('videos/')[-1].replace('/','_')\n        tensor_to_mp4(video, save_dir+'/'+name, fps=8)\n\n"
  },
  {
    "path": "ToonCrafter/lvdm/distributions.py",
    "content": "import torch\nimport numpy as np\n\n\nclass AbstractDistribution:\n    def sample(self):\n        raise NotImplementedError()\n\n    def mode(self):\n        raise NotImplementedError()\n\n\nclass DiracDistribution(AbstractDistribution):\n    def __init__(self, value):\n        self.value = value\n\n    def sample(self):\n        return self.value\n\n    def mode(self):\n        return self.value\n\n\nclass DiagonalGaussianDistribution(object):\n    def __init__(self, parameters, deterministic=False):\n        self.parameters = parameters\n        self.mean, self.logvar = torch.chunk(parameters, 2, dim=1)\n        self.logvar = torch.clamp(self.logvar, -30.0, 20.0)\n        self.deterministic = deterministic\n        self.std = torch.exp(0.5 * self.logvar)\n        self.var = torch.exp(self.logvar)\n        if self.deterministic:\n            self.var = self.std = torch.zeros_like(self.mean).to(device=self.parameters.device)\n\n    def sample(self, noise=None):\n        if noise is None:\n            noise = torch.randn(self.mean.shape)\n        \n        x = self.mean + self.std * noise.to(device=self.parameters.device)\n        return x\n\n    def kl(self, other=None):\n        if self.deterministic:\n            return torch.Tensor([0.])\n        else:\n            if other is None:\n                return 0.5 * torch.sum(torch.pow(self.mean, 2)\n                                       + self.var - 1.0 - self.logvar,\n                                       dim=[1, 2, 3])\n            else:\n                return 0.5 * torch.sum(\n                    torch.pow(self.mean - other.mean, 2) / other.var\n                    + self.var / other.var - 1.0 - self.logvar + other.logvar,\n                    dim=[1, 2, 3])\n\n    def nll(self, sample, dims=[1,2,3]):\n        if self.deterministic:\n            return torch.Tensor([0.])\n        logtwopi = np.log(2.0 * np.pi)\n        return 0.5 * torch.sum(\n            logtwopi + self.logvar + torch.pow(sample - self.mean, 2) / self.var,\n            dim=dims)\n\n    def mode(self):\n        return self.mean\n\n\ndef normal_kl(mean1, logvar1, mean2, logvar2):\n    \"\"\"\n    source: https://github.com/openai/guided-diffusion/blob/27c20a8fab9cb472df5d6bdd6c8d11c8f430b924/guided_diffusion/losses.py#L12\n    Compute the KL divergence between two gaussians.\n    Shapes are automatically broadcasted, so batches can be compared to\n    scalars, among other use cases.\n    \"\"\"\n    tensor = None\n    for obj in (mean1, logvar1, mean2, logvar2):\n        if isinstance(obj, torch.Tensor):\n            tensor = obj\n            break\n    assert tensor is not None, \"at least one argument must be a Tensor\"\n\n    # Force variances to be Tensors. Broadcasting helps convert scalars to\n    # Tensors, but it does not work for torch.exp().\n    logvar1, logvar2 = [\n        x if isinstance(x, torch.Tensor) else torch.tensor(x).to(tensor)\n        for x in (logvar1, logvar2)\n    ]\n\n    return 0.5 * (\n        -1.0\n        + logvar2\n        - logvar1\n        + torch.exp(logvar1 - logvar2)\n        + ((mean1 - mean2) ** 2) * torch.exp(-logvar2)\n    )"
  },
  {
    "path": "ToonCrafter/lvdm/ema.py",
    "content": "import torch\nfrom torch import nn\n\n\nclass LitEma(nn.Module):\n    def __init__(self, model, decay=0.9999, use_num_upates=True):\n        super().__init__()\n        if decay < 0.0 or decay > 1.0:\n            raise ValueError('Decay must be between 0 and 1')\n\n        self.m_name2s_name = {}\n        self.register_buffer('decay', torch.tensor(decay, dtype=torch.float32))\n        self.register_buffer('num_updates', torch.tensor(0,dtype=torch.int) if use_num_upates\n                             else torch.tensor(-1,dtype=torch.int))\n\n        for name, p in model.named_parameters():\n            if p.requires_grad:\n                #remove as '.'-character is not allowed in buffers\n                s_name = name.replace('.','')\n                self.m_name2s_name.update({name:s_name})\n                self.register_buffer(s_name,p.clone().detach().data)\n\n        self.collected_params = []\n\n    def forward(self,model):\n        decay = self.decay\n\n        if self.num_updates >= 0:\n            self.num_updates += 1\n            decay = min(self.decay,(1 + self.num_updates) / (10 + self.num_updates))\n\n        one_minus_decay = 1.0 - decay\n\n        with torch.no_grad():\n            m_param = dict(model.named_parameters())\n            shadow_params = dict(self.named_buffers())\n\n            for key in m_param:\n                if m_param[key].requires_grad:\n                    sname = self.m_name2s_name[key]\n                    shadow_params[sname] = shadow_params[sname].type_as(m_param[key])\n                    shadow_params[sname].sub_(one_minus_decay * (shadow_params[sname] - m_param[key]))\n                else:\n                    assert not key in self.m_name2s_name\n\n    def copy_to(self, model):\n        m_param = dict(model.named_parameters())\n        shadow_params = dict(self.named_buffers())\n        for key in m_param:\n            if m_param[key].requires_grad:\n                m_param[key].data.copy_(shadow_params[self.m_name2s_name[key]].data)\n            else:\n                assert not key in self.m_name2s_name\n\n    def store(self, parameters):\n        \"\"\"\n        Save the current parameters for restoring later.\n        Args:\n          parameters: Iterable of `torch.nn.Parameter`; the parameters to be\n            temporarily stored.\n        \"\"\"\n        self.collected_params = [param.clone() for param in parameters]\n\n    def restore(self, parameters):\n        \"\"\"\n        Restore the parameters stored with the `store` method.\n        Useful to validate the model with EMA parameters without affecting the\n        original optimization process. Store the parameters before the\n        `copy_to` method. After validation (or model saving), use this to\n        restore the former parameters.\n        Args:\n          parameters: Iterable of `torch.nn.Parameter`; the parameters to be\n            updated with the stored parameters.\n        \"\"\"\n        for c_param, param in zip(self.collected_params, parameters):\n            param.data.copy_(c_param.data)"
  },
  {
    "path": "ToonCrafter/lvdm/models/autoencoder.py",
    "content": "import os\nfrom contextlib import contextmanager\nimport torch\nimport numpy as np\nfrom einops import rearrange\nimport torch.nn.functional as F\nimport pytorch_lightning as pl\nfrom lvdm.modules.networks.ae_modules import Encoder, Decoder\nfrom lvdm.distributions import DiagonalGaussianDistribution\nfrom ToonCrafter.utils.utils import instantiate_from_config\n\nTIMESTEPS = 16\n\n\nclass AutoencoderKL(pl.LightningModule):\n    def __init__(self,\n                 ddconfig,\n                 lossconfig,\n                 embed_dim,\n                 ckpt_path=None,\n                 ignore_keys=[],\n                 image_key=\"image\",\n                 colorize_nlabels=None,\n                 monitor=None,\n                 test=False,\n                 logdir=None,\n                 input_dim=4,\n                 test_args=None,\n                 additional_decode_keys=None,\n                 use_checkpoint=False,\n                 diff_boost_factor=3.0,\n                 ):\n        super().__init__()\n        self.image_key = image_key\n        self.encoder = Encoder(**ddconfig)\n        self.decoder = Decoder(**ddconfig)\n        self.loss = instantiate_from_config(lossconfig)\n        assert ddconfig[\"double_z\"]\n        self.quant_conv = torch.nn.Conv2d(2 * ddconfig[\"z_channels\"], 2 * embed_dim, 1)\n        self.post_quant_conv = torch.nn.Conv2d(embed_dim, ddconfig[\"z_channels\"], 1)\n        self.embed_dim = embed_dim\n        self.input_dim = input_dim\n        self.test = test\n        self.test_args = test_args\n        self.logdir = logdir\n        if colorize_nlabels is not None:\n            assert isinstance(colorize_nlabels, int)\n            self.register_buffer(\"colorize\", torch.randn(3, colorize_nlabels, 1, 1))\n        if monitor is not None:\n            self.monitor = monitor\n        if ckpt_path is not None:\n            self.init_from_ckpt(ckpt_path, ignore_keys=ignore_keys)\n        if self.test:\n            self.init_test()\n        if torch.backends.mps.is_available():\n            self._device = torch.device(\"mps\")\n\n    def init_test(self,):\n        self.test = True\n        save_dir = os.path.join(self.logdir, \"test\")\n        if 'ckpt' in self.test_args:\n            ckpt_name = os.path.basename(self.test_args.ckpt).split('.ckpt')[0] + f'_epoch{self._cur_epoch}'\n            self.root = os.path.join(save_dir, ckpt_name)\n        else:\n            self.root = save_dir\n        if 'test_subdir' in self.test_args:\n            self.root = os.path.join(save_dir, self.test_args.test_subdir)\n\n        self.root_zs = os.path.join(self.root, \"zs\")\n        self.root_dec = os.path.join(self.root, \"reconstructions\")\n        self.root_inputs = os.path.join(self.root, \"inputs\")\n        os.makedirs(self.root, exist_ok=True)\n\n        if self.test_args.save_z:\n            os.makedirs(self.root_zs, exist_ok=True)\n        if self.test_args.save_reconstruction:\n            os.makedirs(self.root_dec, exist_ok=True)\n        if self.test_args.save_input:\n            os.makedirs(self.root_inputs, exist_ok=True)\n        assert (self.test_args is not None)\n        self.test_maximum = getattr(self.test_args, 'test_maximum', None)\n        self.count = 0\n        self.eval_metrics = {}\n        self.decodes = []\n        self.save_decode_samples = 2048\n\n    def init_from_ckpt(self, path, ignore_keys=list()):\n        sd = torch.load(path, map_location=\"cpu\")\n        try:\n            self._cur_epoch = sd['epoch']\n            sd 
= sd[\"state_dict\"]\n        except BaseException:\n            self._cur_epoch = 'null'\n        keys = list(sd.keys())\n        for k in keys:\n            for ik in ignore_keys:\n                if k.startswith(ik):\n                    print(\"Deleting key {} from state_dict.\".format(k))\n                    del sd[k]\n        self.load_state_dict(sd, strict=False)\n        # self.load_state_dict(sd, strict=True)\n        print(f\"Restored from {path}\")\n\n    def encode(self, x, return_hidden_states=False, **kwargs):\n        if return_hidden_states:\n            h, hidden = self.encoder(x, return_hidden_states)\n            moments = self.quant_conv(h)\n            posterior = DiagonalGaussianDistribution(moments)\n            return posterior, hidden\n        else:\n            h = self.encoder(x)\n            moments = self.quant_conv(h)\n            posterior = DiagonalGaussianDistribution(moments)\n            return posterior\n\n    def decode(self, z, **kwargs):\n        if len(kwargs) == 0:  # use the original decoder in AutoencoderKL\n            z = self.post_quant_conv(z)\n        dec = self.decoder(z, **kwargs)  # change for SVD decoder by adding **kwargs\n        return dec\n\n    def forward(self, input, sample_posterior=True, **additional_decode_kwargs):\n        input_tuple = (input, )\n        forward_temp = partial(self._forward, sample_posterior=sample_posterior, **additional_decode_kwargs)\n        return checkpoint(forward_temp, input_tuple, self.parameters(), self.use_checkpoint)\n\n    def _forward(self, input, sample_posterior=True, **additional_decode_kwargs):\n        posterior = self.encode(input)\n        if sample_posterior:\n            z = posterior.sample()\n        else:\n            z = posterior.mode()\n        dec = self.decode(z, **additional_decode_kwargs)\n        # print(input.shape, dec.shape) torch.Size([16, 3, 256, 256]) torch.Size([16, 3, 256, 256])\n        return dec, posterior\n\n    def get_input(self, batch, k):\n        x = batch[k]\n        if x.dim() == 5 and self.input_dim == 4:\n            b, c, t, h, w = x.shape\n            self.b = b\n            self.t = t\n            x = rearrange(x, 'b c t h w -> (b t) c h w')\n\n        return x\n\n    def training_step(self, batch, batch_idx, optimizer_idx):\n        inputs = self.get_input(batch, self.image_key)\n        reconstructions, posterior = self(inputs)\n\n        if optimizer_idx == 0:\n            # train encoder+decoder+logvar\n            aeloss, log_dict_ae = self.loss(inputs, reconstructions, posterior, optimizer_idx, self.global_step,\n                                            last_layer=self.get_last_layer(), split=\"train\")\n            self.log(\"aeloss\", aeloss, prog_bar=True, logger=True, on_step=True, on_epoch=True)\n            self.log_dict(log_dict_ae, prog_bar=False, logger=True, on_step=True, on_epoch=False)\n            return aeloss\n\n        if optimizer_idx == 1:\n            # train the discriminator\n            discloss, log_dict_disc = self.loss(inputs, reconstructions, posterior, optimizer_idx, self.global_step,\n                                                last_layer=self.get_last_layer(), split=\"train\")\n\n            self.log(\"discloss\", discloss, prog_bar=True, logger=True, on_step=True, on_epoch=True)\n            self.log_dict(log_dict_disc, prog_bar=False, logger=True, on_step=True, on_epoch=False)\n            return discloss\n\n    def validation_step(self, batch, batch_idx):\n        inputs = self.get_input(batch, 
self.image_key)\n        reconstructions, posterior = self(inputs)\n        aeloss, log_dict_ae = self.loss(inputs, reconstructions, posterior, 0, self.global_step,\n                                        last_layer=self.get_last_layer(), split=\"val\")\n\n        discloss, log_dict_disc = self.loss(inputs, reconstructions, posterior, 1, self.global_step,\n                                            last_layer=self.get_last_layer(), split=\"val\")\n\n        self.log(\"val/rec_loss\", log_dict_ae[\"val/rec_loss\"])\n        self.log_dict(log_dict_ae)\n        self.log_dict(log_dict_disc)\n        return self.log_dict\n\n    def configure_optimizers(self):\n        lr = self.learning_rate\n        opt_ae = torch.optim.Adam(list(self.encoder.parameters()) +\n                                  list(self.decoder.parameters()) +\n                                  list(self.quant_conv.parameters()) +\n                                  list(self.post_quant_conv.parameters()),\n                                  lr=lr, betas=(0.5, 0.9))\n        opt_disc = torch.optim.Adam(self.loss.discriminator.parameters(),\n                                    lr=lr, betas=(0.5, 0.9))\n        return [opt_ae, opt_disc], []\n\n    def get_last_layer(self):\n        return self.decoder.conv_out.weight\n\n    @torch.no_grad()\n    def log_images(self, batch, only_inputs=False, **kwargs):\n        log = dict()\n        x = self.get_input(batch, self.image_key)\n        x = x.to(self.device)\n        if not only_inputs:\n            xrec, posterior = self(x)\n            if x.shape[1] > 3:\n                # colorize with random projection\n                assert xrec.shape[1] > 3\n                x = self.to_rgb(x)\n                xrec = self.to_rgb(xrec)\n            log[\"samples\"] = self.decode(torch.randn_like(posterior.sample()))\n            log[\"reconstructions\"] = xrec\n        log[\"inputs\"] = x\n        return log\n\n    def to_rgb(self, x):\n        assert self.image_key == \"segmentation\"\n        if not hasattr(self, \"colorize\"):\n            self.register_buffer(\"colorize\", torch.randn(3, x.shape[1], 1, 1).to(x))\n        x = F.conv2d(x, weight=self.colorize)\n        x = 2. 
* (x - x.min()) / (x.max() - x.min()) - 1.\n        return x\n\n\nclass IdentityFirstStage(torch.nn.Module):\n    def __init__(self, *args, vq_interface=False, **kwargs):\n        self.vq_interface = vq_interface  # TODO: Should be true by default but check to not break older stuff\n        super().__init__()\n\n    def encode(self, x, *args, **kwargs):\n        return x\n\n    def decode(self, x, *args, **kwargs):\n        return x\n\n    def quantize(self, x, *args, **kwargs):\n        if self.vq_interface:\n            return x, None, [None, None, None]\n        return x\n\n    def forward(self, x, *args, **kwargs):\n        return x\n\n\nfrom lvdm.models.autoencoder_dualref import VideoDecoder\n\n\nclass AutoencoderKL_Dualref(AutoencoderKL):\n    def __init__(self,\n                 ddconfig,\n                 lossconfig,\n                 embed_dim,\n                 ckpt_path=None,\n                 ignore_keys=[],\n                 image_key=\"image\",\n                 colorize_nlabels=None,\n                 monitor=None,\n                 test=False,\n                 logdir=None,\n                 input_dim=4,\n                 test_args=None,\n                 additional_decode_keys=None,\n                 use_checkpoint=False,\n                 diff_boost_factor=3.0,\n                 ):\n        super().__init__(ddconfig, lossconfig, embed_dim, ckpt_path, ignore_keys, image_key, colorize_nlabels, monitor, test, logdir, input_dim, test_args, additional_decode_keys, use_checkpoint, diff_boost_factor)\n        self.decoder = VideoDecoder(**ddconfig)\n\n    def _forward(self, input, sample_posterior=True, **additional_decode_kwargs):\n        posterior, hidden_states = self.encode(input, return_hidden_states=True)\n\n        hidden_states_first_last = []\n        # use only the first and last hidden states\n        for hid in hidden_states:\n            hid = rearrange(hid, '(b t) c h w -> b c t h w', t=TIMESTEPS)\n            hid_new = torch.cat([hid[:, :, 0:1], hid[:, :, -1:]], dim=2)\n            hidden_states_first_last.append(hid_new)\n\n        if sample_posterior:\n            z = posterior.sample()\n        else:\n            z = posterior.mode()\n        dec = self.decode(z, ref_context=hidden_states_first_last, **additional_decode_kwargs)\n        # print(input.shape, dec.shape) torch.Size([16, 3, 256, 256]) torch.Size([16, 3, 256, 256])\n        return dec, posterior\n"
  },
  {
    "path": "ToonCrafter/lvdm/models/autoencoder_dualref.py",
    "content": "#### https://github.com/Stability-AI/generative-models\nfrom einops import rearrange, repeat\nimport logging\nfrom typing import Any, Callable, Optional, Iterable, Union\n\nimport os\nimport numpy as np\nimport torch\nimport torch.nn as nn\nfrom packaging import version\nlogpy = logging.getLogger(__name__)\n\ntry:\n    import xformers\n    import xformers.ops\n    XFORMERS_IS_AVAILABLE = True\nexcept BaseException:\n    XFORMERS_IS_AVAILABLE = False\n    logpy.warning(\"no module 'xformers'. Processing without...\")\n\nfrom lvdm.modules.attention_svd import LinearAttention, CrossAttention, MemoryEfficientCrossAttention\n\n\ndef nonlinearity(x):\n    # swish\n    return x * torch.sigmoid(x)\n\n\ndef Normalize(in_channels, num_groups=32):\n    return torch.nn.GroupNorm(\n        num_groups=num_groups, num_channels=in_channels, eps=1e-6, affine=True\n    )\n\n\nclass ResnetBlock(nn.Module):\n    def __init__(\n        self,\n        *,\n        in_channels,\n        out_channels=None,\n        conv_shortcut=False,\n        dropout,\n        temb_channels=512,\n    ):\n        super().__init__()\n        self.in_channels = in_channels\n        out_channels = in_channels if out_channels is None else out_channels\n        self.out_channels = out_channels\n        self.use_conv_shortcut = conv_shortcut\n\n        self.norm1 = Normalize(in_channels)\n        self.conv1 = torch.nn.Conv2d(\n            in_channels, out_channels, kernel_size=3, stride=1, padding=1\n        )\n        if temb_channels > 0:\n            self.temb_proj = torch.nn.Linear(temb_channels, out_channels)\n        self.norm2 = Normalize(out_channels)\n        self.dropout = torch.nn.Dropout(dropout)\n        self.conv2 = torch.nn.Conv2d(\n            out_channels, out_channels, kernel_size=3, stride=1, padding=1\n        )\n        if self.in_channels != self.out_channels:\n            if self.use_conv_shortcut:\n                self.conv_shortcut = torch.nn.Conv2d(\n                    in_channels, out_channels, kernel_size=3, stride=1, padding=1\n                )\n            else:\n                self.nin_shortcut = torch.nn.Conv2d(\n                    in_channels, out_channels, kernel_size=1, stride=1, padding=0\n                )\n\n    def forward(self, x, temb):\n        h = x\n        h = self.norm1(h)\n        h = nonlinearity(h)\n        h = self.conv1(h)\n\n        if temb is not None:\n            h = h + self.temb_proj(nonlinearity(temb))[:, :, None, None]\n\n        h = self.norm2(h)\n        h = nonlinearity(h)\n        h = self.dropout(h)\n        h = self.conv2(h)\n\n        if self.in_channels != self.out_channels:\n            if self.use_conv_shortcut:\n                x = self.conv_shortcut(x)\n            else:\n                x = self.nin_shortcut(x)\n\n        return x + h\n\n\nclass LinAttnBlock(LinearAttention):\n    \"\"\"to match AttnBlock usage\"\"\"\n\n    def __init__(self, in_channels):\n        super().__init__(dim=in_channels, heads=1, dim_head=in_channels)\n\n\nclass AttnBlock(nn.Module):\n    def __init__(self, in_channels):\n        super().__init__()\n        self.in_channels = in_channels\n\n        self.norm = Normalize(in_channels)\n        self.q = torch.nn.Conv2d(\n            in_channels, in_channels, kernel_size=1, stride=1, padding=0\n        )\n        self.k = torch.nn.Conv2d(\n            in_channels, in_channels, kernel_size=1, stride=1, padding=0\n        )\n        self.v = torch.nn.Conv2d(\n            in_channels, in_channels, kernel_size=1, stride=1, 
padding=0\n        )\n        self.proj_out = torch.nn.Conv2d(\n            in_channels, in_channels, kernel_size=1, stride=1, padding=0\n        )\n\n    def attention(self, h_: torch.Tensor) -> torch.Tensor:\n        h_ = self.norm(h_)\n        q = self.q(h_)\n        k = self.k(h_)\n        v = self.v(h_)\n\n        b, c, h, w = q.shape\n        q, k, v = map(\n            lambda x: rearrange(x, \"b c h w -> b 1 (h w) c\").contiguous(), (q, k, v)\n        )\n        h_ = torch.nn.functional.scaled_dot_product_attention(\n            q, k, v\n        )  # scale is dim ** -0.5 per default\n        # compute attention\n\n        return rearrange(h_, \"b 1 (h w) c -> b c h w\", h=h, w=w, c=c, b=b)\n\n    def forward(self, x, **kwargs):\n        h_ = x\n        h_ = self.attention(h_)\n        h_ = self.proj_out(h_)\n        return x + h_\n\n\nclass MemoryEfficientAttnBlock(nn.Module):\n    \"\"\"\n    Uses xformers efficient implementation,\n    see https://github.com/MatthieuTPHR/diffusers/blob/d80b531ff8060ec1ea982b65a1b8df70f73aa67c/src/diffusers/models/attention.py#L223\n    Note: this is a single-head self-attention operation\n    \"\"\"\n\n    #\n    def __init__(self, in_channels):\n        super().__init__()\n        self.in_channels = in_channels\n\n        self.norm = Normalize(in_channels)\n        self.q = torch.nn.Conv2d(\n            in_channels, in_channels, kernel_size=1, stride=1, padding=0\n        )\n        self.k = torch.nn.Conv2d(\n            in_channels, in_channels, kernel_size=1, stride=1, padding=0\n        )\n        self.v = torch.nn.Conv2d(\n            in_channels, in_channels, kernel_size=1, stride=1, padding=0\n        )\n        self.proj_out = torch.nn.Conv2d(\n            in_channels, in_channels, kernel_size=1, stride=1, padding=0\n        )\n        self.attention_op: Optional[Any] = None\n\n    def attention(self, h_: torch.Tensor) -> torch.Tensor:\n        h_ = self.norm(h_)\n        q = self.q(h_)\n        k = self.k(h_)\n        v = self.v(h_)\n\n        # compute attention\n        B, C, H, W = q.shape\n        q, k, v = map(lambda x: rearrange(x, \"b c h w -> b (h w) c\"), (q, k, v))\n\n        q, k, v = map(\n            lambda t: t.unsqueeze(3)\n            .reshape(B, t.shape[1], 1, C)\n            .permute(0, 2, 1, 3)\n            .reshape(B * 1, t.shape[1], C)\n            .contiguous(),\n            (q, k, v),\n        )\n        out = xformers.ops.memory_efficient_attention(\n            q, k, v, attn_bias=None, op=self.attention_op\n        )\n\n        out = (\n            out.unsqueeze(0)\n            .reshape(B, 1, out.shape[1], C)\n            .permute(0, 2, 1, 3)\n            .reshape(B, out.shape[1], C)\n        )\n        return rearrange(out, \"b (h w) c -> b c h w\", b=B, h=H, w=W, c=C)\n\n    def forward(self, x, **kwargs):\n        h_ = x\n        h_ = self.attention(h_)\n        h_ = self.proj_out(h_)\n        return x + h_\n\n\nclass CrossAttentionWrapper(CrossAttention):\n    def forward(self, x, context=None, mask=None, **unused_kwargs):\n        b, c, h, w = x.shape\n        x = rearrange(x, \"b c h w -> b 1 (h w) c\").contiguous()\n        out = super().forward(x, context=context, mask=mask)\n        out = rearrange(out, \"b 1 (h w) c -> b c h w\", h=h, w=w, c=c, b=b)\n        return x + out\n\n\nclass MemoryEfficientCrossAttentionWrapper(MemoryEfficientCrossAttention):\n    def forward(self, x, context=None, mask=None, **unused_kwargs):\n        b, c, h, w = x.shape\n        x = rearrange(x, \"b c h w -> b (h w) c\")\n    
    out = super().forward(x, context=context, mask=mask)\n        out = rearrange(out, \"b (h w) c -> b c h w\", h=h, w=w, c=c)\n        return x + out\n\n\ndef make_attn(in_channels, attn_type=\"vanilla\", attn_kwargs=None):\n    assert attn_type in [\n        \"vanilla\",\n        \"vanilla-xformers\",\n        \"cross-attn\",\n        \"memory-efficient-cross-attn\",\n        \"linear\",\n        \"none\",\n        \"cross-attn-fusion\",\n        \"memory-efficient-cross-attn-fusion\",\n    ], f\"attn_type {attn_type} unknown\"\n    if (\n        version.parse(torch.__version__) < version.parse(\"2.0.0\")\n        and attn_type != \"none\"\n    ):\n        assert XFORMERS_IS_AVAILABLE, (\n            f\"We do not support vanilla attention in {torch.__version__} anymore, \"\n            f\"as it is too expensive. Please install xformers via e.g. 'pip install xformers==0.0.16'\"\n        )\n        # attn_type = \"vanilla-xformers\"\n    logpy.info(f\"making attention of type '{attn_type}' with {in_channels} in_channels\")\n    if attn_type == \"vanilla\":\n        assert attn_kwargs is None\n        return AttnBlock(in_channels)\n    elif attn_type == \"vanilla-xformers\":\n        logpy.info(\n            f\"building MemoryEfficientAttnBlock with {in_channels} in_channels...\"\n        )\n        return MemoryEfficientAttnBlock(in_channels)\n    elif attn_type == \"cross-attn\":\n        attn_kwargs[\"query_dim\"] = in_channels\n        return CrossAttentionWrapper(**attn_kwargs)\n    elif attn_type == \"memory-efficient-cross-attn\":\n        attn_kwargs[\"query_dim\"] = in_channels\n        return MemoryEfficientCrossAttentionWrapper(**attn_kwargs)\n    elif attn_type == \"cross-attn-fusion\":\n        attn_kwargs[\"query_dim\"] = in_channels\n        return CrossAttentionWrapperFusion(**attn_kwargs)\n    elif attn_type == \"memory-efficient-cross-attn-fusion\":\n        attn_kwargs[\"query_dim\"] = in_channels\n        return MemoryEfficientCrossAttentionWrapperFusion(**attn_kwargs)\n    elif attn_type == \"none\":\n        return nn.Identity(in_channels)\n    else:\n        return LinAttnBlock(in_channels)\n\n\nclass CrossAttentionWrapperFusion(CrossAttention):\n    def __init__(self, query_dim, context_dim=None, heads=8, dim_head=64, dropout=0, **kwargs):\n        super().__init__(query_dim, context_dim, heads, dim_head, dropout, **kwargs)\n        self.dim_head = dim_head\n        self.norm = Normalize(query_dim)\n        nn.init.zeros_(self.to_out[0].weight)\n        nn.init.zeros_(self.to_out[0].bias)\n\n    def forward(self, x, context=None, mask=None):\n        if self.training:\n            return checkpoint(self._forward, x, context, mask, use_reentrant=False)\n        else:\n            return self._forward(x, context, mask)\n\n    def _forward(\n        self,\n        x,\n        context=None,\n        mask=None,\n    ):\n        bt, c, h, w = x.shape\n        h_ = self.norm(x)\n        h_ = rearrange(h_, \"b c h w -> b (h w) c\")\n        q = self.to_q(h_)\n\n        b, c, l, h, w = context.shape\n        context = rearrange(context, \"b c l h w -> (b l) (h w) c\")\n        k = self.to_k(context)\n        v = self.to_v(context)\n        k = rearrange(k, \"(b l) d c -> b l d c\", l=l)\n        k = torch.cat([k[:, [0] * (bt // b)], k[:, [1] * (bt // b)]], dim=2)\n        k = rearrange(k, \"b l d c -> (b l) d c\")\n\n        v = rearrange(v, \"(b l) d c -> b l d c\", l=l)\n        v = torch.cat([v[:, [0] * (bt // b)], v[:, [1] * (bt // b)]], dim=2)\n        v = rearrange(v, 
\"b l d c -> (b l) d c\")\n\n        b, _, _ = q.shape  # actually bt\n        q, k, v = map(\n            lambda t: t.unsqueeze(3)\n            .reshape(b, t.shape[1], self.heads, self.dim_head)\n            .permute(0, 2, 1, 3)\n            .reshape(b * self.heads, t.shape[1], self.dim_head)\n            .contiguous(),\n            (q, k, v),\n        )\n        sdpa = torch.nn.functional.scaled_dot_product_attention\n\n        def slow_sdpa(q, k, v):\n            out_list = []\n            step = 10\n            for i in range(0, q.shape[0], step):\n                out_i = sdpa(q[i:i + step], k[i:i + step], v[i:i + step])\n                out_list.append(out_i)\n            return torch.cat(out_list, dim=0)\n        # mps 将 qkv 分十次处理, 避免内存溢出\n        if os.environ.get(\"TOON_MEM_STRATEGY\", \"none\") == \"low\":\n            out = slow_sdpa(q, k, v)\n        else:\n            out = sdpa(q, k, v)\n        out = (\n            out.unsqueeze(0)\n            .reshape(b, self.heads, out.shape[1], self.dim_head)\n            .permute(0, 2, 1, 3)\n            .reshape(b, out.shape[1], self.heads * self.dim_head)\n        )\n        out = self.to_out(out)\n        out = rearrange(out, \"bt (h w) c -> bt c h w\", h=h, w=w, c=c)\n        return x + out\n\n\nclass MemoryEfficientCrossAttentionWrapperFusion(MemoryEfficientCrossAttention):\n    # print('x.shape: ',x.shape, 'context.shape: ',context.shape) ##torch.Size([8, 128, 256, 256]) torch.Size([1, 128, 2, 256, 256])\n    def __init__(self, query_dim, context_dim=None, heads=8, dim_head=64, dropout=0, **kwargs):\n        super().__init__(query_dim, context_dim, heads, dim_head, dropout, **kwargs)\n        self.norm = Normalize(query_dim)\n        nn.init.zeros_(self.to_out[0].weight)\n        nn.init.zeros_(self.to_out[0].bias)\n\n    def forward(self, x, context=None, mask=None):\n        if self.training:\n            return checkpoint(self._forward, x, context, mask, use_reentrant=False)\n        else:\n            return self._forward(x, context, mask)\n\n    def _forward(\n        self,\n        x,\n        context=None,\n        mask=None,\n    ):\n        bt, c, h, w = x.shape\n        h_ = self.norm(x)\n        h_ = rearrange(h_, \"b c h w -> b (h w) c\")\n        q = self.to_q(h_)\n\n        b, c, l, h, w = context.shape\n        context = rearrange(context, \"b c l h w -> (b l) (h w) c\")\n        k = self.to_k(context)\n        v = self.to_v(context)\n        k = rearrange(k, \"(b l) d c -> b l d c\", l=l)\n        k = torch.cat([k[:, [0] * (bt // b)], k[:, [1] * (bt // b)]], dim=2)\n        k = rearrange(k, \"b l d c -> (b l) d c\")\n\n        v = rearrange(v, \"(b l) d c -> b l d c\", l=l)\n        v = torch.cat([v[:, [0] * (bt // b)], v[:, [1] * (bt // b)]], dim=2)\n        v = rearrange(v, \"b l d c -> (b l) d c\")\n\n        b, _, _ = q.shape  # actually bt\n        q, k, v = map(\n            lambda t: t.unsqueeze(3)\n            .reshape(b, t.shape[1], self.heads, self.dim_head)\n            .permute(0, 2, 1, 3)\n            .reshape(b * self.heads, t.shape[1], self.dim_head)\n            .contiguous(),\n            (q, k, v),\n        )\n\n        # actually compute the attention, what we cannot get enough of\n        if version.parse(xformers.__version__) >= version.parse(\"0.0.21\"):\n            # NOTE: workaround for\n            # https://github.com/facebookresearch/xformers/issues/845\n            max_bs = 32768\n            N = q.shape[0]\n            n_batches = math.ceil(N / max_bs)\n            out = list()\n        
    for i_batch in range(n_batches):\n                batch = slice(i_batch * max_bs, (i_batch + 1) * max_bs)\n                out.append(\n                    xformers.ops.memory_efficient_attention(\n                        q[batch],\n                        k[batch],\n                        v[batch],\n                        attn_bias=None,\n                        op=self.attention_op,\n                    )\n                )\n            out = torch.cat(out, 0)\n        else:\n            out = xformers.ops.memory_efficient_attention(\n                q, k, v, attn_bias=None, op=self.attention_op\n            )\n\n        # TODO: Use this directly in the attention operation, as a bias\n        if exists(mask):\n            raise NotImplementedError\n        out = (\n            out.unsqueeze(0)\n            .reshape(b, self.heads, out.shape[1], self.dim_head)\n            .permute(0, 2, 1, 3)\n            .reshape(b, out.shape[1], self.heads * self.dim_head)\n        )\n        out = self.to_out(out)\n        out = rearrange(out, \"bt (h w) c -> bt c h w\", h=h, w=w, c=c)\n        return x + out\n\n\nclass Combiner(nn.Module):\n    def __init__(self, ch) -> None:\n        super().__init__()\n        self.conv = nn.Conv2d(ch, ch, 1, padding=0)\n\n        nn.init.zeros_(self.conv.weight)\n        nn.init.zeros_(self.conv.bias)\n\n    def forward(self, x, context):\n        if self.training:\n            return checkpoint(self._forward, x, context, use_reentrant=False)\n        else:\n            return self._forward(x, context)\n\n    def _forward(self, x, context):\n        # x: b c h w, context: b c 2 h w\n        b, c, l, h, w = context.shape\n        bt, c, h, w = x.shape\n        context = rearrange(context, \"b c l h w -> (b l) c h w\")\n        context = self.conv(context)\n        context = rearrange(context, \"(b l) c h w -> b c l h w\", l=l)\n        x = rearrange(x, \"(b t) c h w -> b c t h w\", t=bt // b)\n        x[:, :, 0] = x[:, :, 0] + context[:, :, 0]\n        x[:, :, -1] = x[:, :, -1] + context[:, :, 1]\n        x = rearrange(x, \"b c t h w -> (b t) c h w\")\n        return x\n\n\nclass Decoder(nn.Module):\n    def __init__(\n        self,\n        *,\n        ch,\n        out_ch,\n        ch_mult=(1, 2, 4, 8),\n        num_res_blocks,\n        attn_resolutions,\n        dropout=0.0,\n        resamp_with_conv=True,\n        in_channels,\n        resolution,\n        z_channels,\n        give_pre_end=False,\n        tanh_out=False,\n        use_linear_attn=False,\n        attn_type=\"vanilla-xformers\",\n        attn_level=[2, 3],\n        **ignorekwargs,\n    ):\n        if attn_type == \"vanilla-xformers\" and not XFORMERS_IS_AVAILABLE:\n            attn_type = \"vanilla\"\n        super().__init__()\n        if use_linear_attn:\n            attn_type = \"linear\"\n        self.ch = ch\n        self.temb_ch = 0\n        self.num_resolutions = len(ch_mult)\n        self.num_res_blocks = num_res_blocks\n        self.resolution = resolution\n        self.in_channels = in_channels\n        self.give_pre_end = give_pre_end\n        self.tanh_out = tanh_out\n        self.attn_level = attn_level\n        # compute in_ch_mult, block_in and curr_res at lowest res\n        in_ch_mult = (1,) + tuple(ch_mult)\n        block_in = ch * ch_mult[self.num_resolutions - 1]\n        curr_res = resolution // 2 ** (self.num_resolutions - 1)\n        self.z_shape = (1, z_channels, curr_res, curr_res)\n        logpy.info(\n            \"Working with z of shape {} = {} dimensions.\".format(\n      
          self.z_shape, np.prod(self.z_shape)\n            )\n        )\n\n        make_attn_cls = self._make_attn()\n        make_resblock_cls = self._make_resblock()\n        make_conv_cls = self._make_conv()\n        # z to block_in\n        self.conv_in = torch.nn.Conv2d(\n            z_channels, block_in, kernel_size=3, stride=1, padding=1\n        )\n\n        # middle\n        self.mid = nn.Module()\n        self.mid.block_1 = make_resblock_cls(\n            in_channels=block_in,\n            out_channels=block_in,\n            temb_channels=self.temb_ch,\n            dropout=dropout,\n        )\n        self.mid.attn_1 = make_attn_cls(block_in, attn_type=attn_type)\n        self.mid.block_2 = make_resblock_cls(\n            in_channels=block_in,\n            out_channels=block_in,\n            temb_channels=self.temb_ch,\n            dropout=dropout,\n        )\n\n        # upsampling\n        self.up = nn.ModuleList()\n        self.attn_refinement = nn.ModuleList()\n        for i_level in reversed(range(self.num_resolutions)):\n            block = nn.ModuleList()\n            attn = nn.ModuleList()\n            block_out = ch * ch_mult[i_level]\n            for i_block in range(self.num_res_blocks + 1):\n                block.append(\n                    make_resblock_cls(\n                        in_channels=block_in,\n                        out_channels=block_out,\n                        temb_channels=self.temb_ch,\n                        dropout=dropout,\n                    )\n                )\n                block_in = block_out\n                if curr_res in attn_resolutions:\n                    attn.append(make_attn_cls(block_in, attn_type=attn_type))\n            up = nn.Module()\n            up.block = block\n            up.attn = attn\n            if i_level != 0:\n                up.upsample = Upsample(block_in, resamp_with_conv)\n                curr_res = curr_res * 2\n            self.up.insert(0, up)  # prepend to get consistent order\n\n            if i_level in self.attn_level:\n                _attn_type = 'memory-efficient-cross-attn-fusion' if XFORMERS_IS_AVAILABLE else 'cross-attn-fusion'\n                self.attn_refinement.insert(0, make_attn_cls(block_in, attn_type=_attn_type, attn_kwargs={}))\n            else:\n                self.attn_refinement.insert(0, Combiner(block_in))\n        # end\n        self.norm_out = Normalize(block_in)\n        self.attn_refinement.append(Combiner(block_in))\n        self.conv_out = make_conv_cls(\n            block_in, out_ch, kernel_size=3, stride=1, padding=1\n        )\n\n    def _make_attn(self) -> Callable:\n        return make_attn\n\n    def _make_resblock(self) -> Callable:\n        return ResnetBlock\n\n    def _make_conv(self) -> Callable:\n        return torch.nn.Conv2d\n\n    def get_last_layer(self, **kwargs):\n        return self.conv_out.weight\n\n    def forward(self, z, ref_context=None, **kwargs):\n        # ref_context: b c 2 h w, 2 means starting and ending frame\n        # assert z.shape[1:] == self.z_shape[1:]\n        self.last_z_shape = z.shape\n        # timestep embedding\n        temb = None\n\n        # z to block_in\n        h = self.conv_in(z)\n\n        # middle\n        h = self.mid.block_1(h, temb, **kwargs)\n        h = self.mid.attn_1(h, **kwargs)\n        h = self.mid.block_2(h, temb, **kwargs)\n\n        # upsampling\n        for i_level in reversed(range(self.num_resolutions)):\n            for i_block in range(self.num_res_blocks + 1):\n                h = 
self.up[i_level].block[i_block](h, temb, **kwargs)\n                if len(self.up[i_level].attn) > 0:\n                    h = self.up[i_level].attn[i_block](h, **kwargs)\n            if ref_context:\n                h = self.attn_refinement[i_level](x=h, context=ref_context[i_level])\n            if i_level != 0:\n                h = self.up[i_level].upsample(h)\n\n        # end\n        if self.give_pre_end:\n            return h\n\n        h = self.norm_out(h)\n        h = nonlinearity(h)\n        if ref_context:\n            # print(h.shape, ref_context[i_level].shape) #torch.Size([8, 128, 256, 256]) torch.Size([1, 128, 2, 256, 256])\n            h = self.attn_refinement[-1](x=h, context=ref_context[-1])\n        h = self.conv_out(h, **kwargs)\n        if self.tanh_out:\n            h = torch.tanh(h)\n        return h\n\n#####\n\n\nfrom abc import abstractmethod\nfrom lvdm.models.utils_diffusion import timestep_embedding\n\nfrom torch.utils.checkpoint import checkpoint\nfrom lvdm.basics import (\n    zero_module,\n    conv_nd,\n    linear,\n    normalization,\n)\nfrom lvdm.modules.networks.openaimodel3d import Upsample, Downsample\n\n\nclass TimestepBlock(nn.Module):\n    \"\"\"\n    Any module where forward() takes timestep embeddings as a second argument.\n    \"\"\"\n\n    @abstractmethod\n    def forward(self, x: torch.Tensor, emb: torch.Tensor):\n        \"\"\"\n        Apply the module to `x` given `emb` timestep embeddings.\n        \"\"\"\n\n\nclass ResBlock(TimestepBlock):\n    \"\"\"\n    A residual block that can optionally change the number of channels.\n    :param channels: the number of input channels.\n    :param emb_channels: the number of timestep embedding channels.\n    :param dropout: the rate of dropout.\n    :param out_channels: if specified, the number of out channels.\n    :param use_conv: if True and out_channels is specified, use a spatial\n        convolution instead of a smaller 1x1 convolution to change the\n        channels in the skip connection.\n    :param dims: determines if the signal is 1D, 2D, or 3D.\n    :param use_checkpoint: if True, use gradient checkpointing on this module.\n    :param up: if True, use this block for upsampling.\n    :param down: if True, use this block for downsampling.\n    \"\"\"\n\n    def __init__(\n        self,\n        channels: int,\n        emb_channels: int,\n        dropout: float,\n        out_channels: Optional[int] = None,\n        use_conv: bool = False,\n        use_scale_shift_norm: bool = False,\n        dims: int = 2,\n        use_checkpoint: bool = False,\n        up: bool = False,\n        down: bool = False,\n        kernel_size: int = 3,\n        exchange_temb_dims: bool = False,\n        skip_t_emb: bool = False,\n    ):\n        super().__init__()\n        self.channels = channels\n        self.emb_channels = emb_channels\n        self.dropout = dropout\n        self.out_channels = out_channels or channels\n        self.use_conv = use_conv\n        self.use_checkpoint = use_checkpoint\n        self.use_scale_shift_norm = use_scale_shift_norm\n        self.exchange_temb_dims = exchange_temb_dims\n\n        if isinstance(kernel_size, Iterable):\n            padding = [k // 2 for k in kernel_size]\n        else:\n            padding = kernel_size // 2\n\n        self.in_layers = nn.Sequential(\n            normalization(channels),\n            nn.SiLU(),\n            conv_nd(dims, channels, self.out_channels, kernel_size, padding=padding),\n        )\n\n        self.updown = up or down\n\n        if 
up:\n            self.h_upd = Upsample(channels, False, dims)\n            self.x_upd = Upsample(channels, False, dims)\n        elif down:\n            self.h_upd = Downsample(channels, False, dims)\n            self.x_upd = Downsample(channels, False, dims)\n        else:\n            self.h_upd = self.x_upd = nn.Identity()\n\n        self.skip_t_emb = skip_t_emb\n        self.emb_out_channels = (\n            2 * self.out_channels if use_scale_shift_norm else self.out_channels\n        )\n        if self.skip_t_emb:\n            # print(f\"Skipping timestep embedding in {self.__class__.__name__}\")\n            assert not self.use_scale_shift_norm\n            self.emb_layers = None\n            self.exchange_temb_dims = False\n        else:\n            self.emb_layers = nn.Sequential(\n                nn.SiLU(),\n                linear(\n                    emb_channels,\n                    self.emb_out_channels,\n                ),\n            )\n\n        self.out_layers = nn.Sequential(\n            normalization(self.out_channels),\n            nn.SiLU(),\n            nn.Dropout(p=dropout),\n            zero_module(\n                conv_nd(\n                    dims,\n                    self.out_channels,\n                    self.out_channels,\n                    kernel_size,\n                    padding=padding,\n                )\n            ),\n        )\n\n        if self.out_channels == channels:\n            self.skip_connection = nn.Identity()\n        elif use_conv:\n            self.skip_connection = conv_nd(\n                dims, channels, self.out_channels, kernel_size, padding=padding\n            )\n        else:\n            self.skip_connection = conv_nd(dims, channels, self.out_channels, 1)\n\n    def forward(self, x: torch.Tensor, emb: torch.Tensor) -> torch.Tensor:\n        \"\"\"\n        Apply the block to a Tensor, conditioned on a timestep embedding.\n        :param x: an [N x C x ...] Tensor of features.\n        :param emb: an [N x emb_channels] Tensor of timestep embeddings.\n        :return: an [N x C x ...] Tensor of outputs.\n        \"\"\"\n        if self.use_checkpoint:\n            return checkpoint(self._forward, x, emb, use_reentrant=False)\n        else:\n            return self._forward(x, emb)\n\n    def _forward(self, x: torch.Tensor, emb: torch.Tensor) -> torch.Tensor:\n        if self.updown:\n            in_rest, in_conv = self.in_layers[:-1], self.in_layers[-1]\n            h = in_rest(x)\n            h = self.h_upd(h)\n            x = self.x_upd(x)\n            h = in_conv(h)\n        else:\n            h = self.in_layers(x)\n\n        if self.skip_t_emb:\n            emb_out = torch.zeros_like(h)\n        else:\n            emb_out = self.emb_layers(emb).type(h.dtype)\n        while len(emb_out.shape) < len(h.shape):\n            emb_out = emb_out[..., None]\n        if self.use_scale_shift_norm:\n            out_norm, out_rest = self.out_layers[0], self.out_layers[1:]\n            scale, shift = torch.chunk(emb_out, 2, dim=1)\n            h = out_norm(h) * (1 + scale) + shift\n            h = out_rest(h)\n        else:\n            if self.exchange_temb_dims:\n                emb_out = rearrange(emb_out, \"b t c ... 
-> b c t ...\")\n            h = h + emb_out\n            h = self.out_layers(h)\n        return self.skip_connection(x) + h\n#####\n\n\n#####\nfrom lvdm.modules.attention_svd import *\n\n\nclass VideoTransformerBlock(nn.Module):\n    ATTENTION_MODES = {\n        \"softmax\": CrossAttention,\n        \"softmax-xformers\": MemoryEfficientCrossAttention,\n    }\n\n    def __init__(\n        self,\n        dim,\n        n_heads,\n        d_head,\n        dropout=0.0,\n        context_dim=None,\n        gated_ff=True,\n        checkpoint=True,\n        timesteps=None,\n        ff_in=False,\n        inner_dim=None,\n        attn_mode=\"softmax\",\n        disable_self_attn=False,\n        disable_temporal_crossattention=False,\n        switch_temporal_ca_to_sa=False,\n    ):\n        super().__init__()\n\n        attn_cls = self.ATTENTION_MODES[attn_mode]\n\n        self.ff_in = ff_in or inner_dim is not None\n        if inner_dim is None:\n            inner_dim = dim\n\n        assert int(n_heads * d_head) == inner_dim\n\n        self.is_res = inner_dim == dim\n\n        if self.ff_in:\n            self.norm_in = nn.LayerNorm(dim)\n            self.ff_in = FeedForward(\n                dim, dim_out=inner_dim, dropout=dropout, glu=gated_ff\n            )\n\n        self.timesteps = timesteps\n        self.disable_self_attn = disable_self_attn\n        if self.disable_self_attn:\n            self.attn1 = attn_cls(\n                query_dim=inner_dim,\n                heads=n_heads,\n                dim_head=d_head,\n                context_dim=context_dim,\n                dropout=dropout,\n            )  # is a cross-attention\n        else:\n            self.attn1 = attn_cls(\n                query_dim=inner_dim, heads=n_heads, dim_head=d_head, dropout=dropout\n            )  # is a self-attention\n\n        self.ff = FeedForward(inner_dim, dim_out=dim, dropout=dropout, glu=gated_ff)\n\n        if disable_temporal_crossattention:\n            if switch_temporal_ca_to_sa:\n                raise ValueError\n            else:\n                self.attn2 = None\n        else:\n            self.norm2 = nn.LayerNorm(inner_dim)\n            if switch_temporal_ca_to_sa:\n                self.attn2 = attn_cls(\n                    query_dim=inner_dim, heads=n_heads, dim_head=d_head, dropout=dropout\n                )  # is a self-attention\n            else:\n                self.attn2 = attn_cls(\n                    query_dim=inner_dim,\n                    context_dim=context_dim,\n                    heads=n_heads,\n                    dim_head=d_head,\n                    dropout=dropout,\n                )  # is self-attn if context is none\n\n        self.norm1 = nn.LayerNorm(inner_dim)\n        self.norm3 = nn.LayerNorm(inner_dim)\n        self.switch_temporal_ca_to_sa = switch_temporal_ca_to_sa\n\n        self.checkpoint = checkpoint\n        if self.checkpoint:\n            print(f\"====>{self.__class__.__name__} is using checkpointing\")\n        else:\n            print(f\"====>{self.__class__.__name__} is NOT using checkpointing\")\n\n    def forward(\n        self, x: torch.Tensor, context: torch.Tensor = None, timesteps: int = None\n    ) -> torch.Tensor:\n        if self.checkpoint:\n            return checkpoint(self._forward, x, context, timesteps, use_reentrant=False)\n        else:\n            return self._forward(x, context, timesteps=timesteps)\n\n    def _forward(self, x, context=None, timesteps=None):\n        assert self.timesteps or timesteps\n        assert not 
(self.timesteps and timesteps) or self.timesteps == timesteps\n        timesteps = self.timesteps or timesteps\n        B, S, C = x.shape\n        x = rearrange(x, \"(b t) s c -> (b s) t c\", t=timesteps)\n\n        if self.ff_in:\n            x_skip = x\n            x = self.ff_in(self.norm_in(x))\n            if self.is_res:\n                x += x_skip\n\n        if self.disable_self_attn:\n            x = self.attn1(self.norm1(x), context=context) + x\n        else:\n            x = self.attn1(self.norm1(x)) + x\n\n        if self.attn2 is not None:\n            if self.switch_temporal_ca_to_sa:\n                x = self.attn2(self.norm2(x)) + x\n            else:\n                x = self.attn2(self.norm2(x), context=context) + x\n        x_skip = x\n        x = self.ff(self.norm3(x))\n        if self.is_res:\n            x += x_skip\n\n        x = rearrange(\n            x, \"(b s) t c -> (b t) s c\", s=S, b=B // timesteps, c=C, t=timesteps\n        )\n        return x\n\n    def get_last_layer(self):\n        return self.ff.net[-1].weight\n\n#####\n\n\n#####\nimport functools\n\n\ndef partialclass(cls, *args, **kwargs):\n    class NewCls(cls):\n        __init__ = functools.partialmethod(cls.__init__, *args, **kwargs)\n\n    return NewCls\n######\n\n\nclass VideoResBlock(ResnetBlock):\n    def __init__(\n        self,\n        out_channels,\n        *args,\n        dropout=0.0,\n        video_kernel_size=3,\n        alpha=0.0,\n        merge_strategy=\"learned\",\n        **kwargs,\n    ):\n        super().__init__(out_channels=out_channels, dropout=dropout, *args, **kwargs)\n        if video_kernel_size is None:\n            video_kernel_size = [3, 1, 1]\n        self.time_stack = ResBlock(\n            channels=out_channels,\n            emb_channels=0,\n            dropout=dropout,\n            dims=3,\n            use_scale_shift_norm=False,\n            use_conv=False,\n            up=False,\n            down=False,\n            kernel_size=video_kernel_size,\n            use_checkpoint=True,\n            skip_t_emb=True,\n        )\n\n        self.merge_strategy = merge_strategy\n        if self.merge_strategy == \"fixed\":\n            self.register_buffer(\"mix_factor\", torch.Tensor([alpha]))\n        elif self.merge_strategy == \"learned\":\n            self.register_parameter(\n                \"mix_factor\", torch.nn.Parameter(torch.Tensor([alpha]))\n            )\n        else:\n            raise ValueError(f\"unknown merge strategy {self.merge_strategy}\")\n\n    def get_alpha(self, bs):\n        if self.merge_strategy == \"fixed\":\n            return self.mix_factor\n        elif self.merge_strategy == \"learned\":\n            return torch.sigmoid(self.mix_factor)\n        else:\n            raise NotImplementedError()\n\n    def forward(self, x, temb, skip_video=False, timesteps=None):\n        if timesteps is None:\n            timesteps = self.timesteps\n\n        b, c, h, w = x.shape\n\n        x = super().forward(x, temb)\n\n        if not skip_video:\n            x_mix = rearrange(x, \"(b t) c h w -> b c t h w\", t=timesteps)\n\n            x = rearrange(x, \"(b t) c h w -> b c t h w\", t=timesteps)\n\n            x = self.time_stack(x, temb)\n\n            alpha = self.get_alpha(bs=b // timesteps)\n            x = alpha * x + (1.0 - alpha) * x_mix\n\n            x = rearrange(x, \"b c t h w -> (b t) c h w\")\n        return x\n\n\nclass AE3DConv(torch.nn.Conv2d):\n    def __init__(self, in_channels, out_channels, video_kernel_size=3, *args, **kwargs):\n        
super().__init__(in_channels, out_channels, *args, **kwargs)\n        if isinstance(video_kernel_size, Iterable):\n            padding = [int(k // 2) for k in video_kernel_size]\n        else:\n            padding = int(video_kernel_size // 2)\n\n        self.time_mix_conv = torch.nn.Conv3d(\n            in_channels=out_channels,\n            out_channels=out_channels,\n            kernel_size=video_kernel_size,\n            padding=padding,\n        )\n\n    def forward(self, input, timesteps, skip_video=False):\n        x = super().forward(input)\n        if skip_video:\n            return x\n        x = rearrange(x, \"(b t) c h w -> b c t h w\", t=timesteps)\n        x = self.time_mix_conv(x)\n        return rearrange(x, \"b c t h w -> (b t) c h w\")\n\n\nclass VideoBlock(AttnBlock):\n    def __init__(\n        self, in_channels: int, alpha: float = 0, merge_strategy: str = \"learned\"\n    ):\n        super().__init__(in_channels)\n        # no context, single headed, as in base class\n        self.time_mix_block = VideoTransformerBlock(\n            dim=in_channels,\n            n_heads=1,\n            d_head=in_channels,\n            checkpoint=True,\n            ff_in=True,\n            attn_mode=\"softmax\",\n        )\n\n        time_embed_dim = self.in_channels * 4\n        self.video_time_embed = torch.nn.Sequential(\n            torch.nn.Linear(self.in_channels, time_embed_dim),\n            torch.nn.SiLU(),\n            torch.nn.Linear(time_embed_dim, self.in_channels),\n        )\n\n        self.merge_strategy = merge_strategy\n        if self.merge_strategy == \"fixed\":\n            self.register_buffer(\"mix_factor\", torch.Tensor([alpha]))\n        elif self.merge_strategy == \"learned\":\n            self.register_parameter(\n                \"mix_factor\", torch.nn.Parameter(torch.Tensor([alpha]))\n            )\n        else:\n            raise ValueError(f\"unknown merge strategy {self.merge_strategy}\")\n\n    def forward(self, x, timesteps, skip_video=False):\n        if skip_video:\n            return super().forward(x)\n\n        x_in = x\n        x = self.attention(x)\n        h, w = x.shape[2:]\n        x = rearrange(x, \"b c h w -> b (h w) c\")\n\n        x_mix = x\n        num_frames = torch.arange(timesteps, device=x.device)\n        num_frames = repeat(num_frames, \"t -> b t\", b=x.shape[0] // timesteps)\n        num_frames = rearrange(num_frames, \"b t -> (b t)\")\n        t_emb = timestep_embedding(num_frames, self.in_channels, repeat_only=False)\n        emb = self.video_time_embed(t_emb)  # b, n_channels\n        emb = emb[:, None, :]\n        x_mix = x_mix + emb\n\n        alpha = self.get_alpha()\n        x_mix = self.time_mix_block(x_mix, timesteps=timesteps)\n        x = alpha * x + (1.0 - alpha) * x_mix  # alpha merge\n\n        x = rearrange(x, \"b (h w) c -> b c h w\", h=h, w=w)\n        x = self.proj_out(x)\n\n        return x_in + x\n\n    def get_alpha(\n        self,\n    ):\n        if self.merge_strategy == \"fixed\":\n            return self.mix_factor\n        elif self.merge_strategy == \"learned\":\n            return torch.sigmoid(self.mix_factor)\n        else:\n            raise NotImplementedError(f\"unknown merge strategy {self.merge_strategy}\")\n\n\nclass MemoryEfficientVideoBlock(MemoryEfficientAttnBlock):\n    def __init__(\n        self, in_channels: int, alpha: float = 0, merge_strategy: str = \"learned\"\n    ):\n        super().__init__(in_channels)\n        # no context, single headed, as in base class\n        
self.time_mix_block = VideoTransformerBlock(\n            dim=in_channels,\n            n_heads=1,\n            d_head=in_channels,\n            checkpoint=True,\n            ff_in=True,\n            attn_mode=\"softmax-xformers\",\n        )\n\n        time_embed_dim = self.in_channels * 4\n        self.video_time_embed = torch.nn.Sequential(\n            torch.nn.Linear(self.in_channels, time_embed_dim),\n            torch.nn.SiLU(),\n            torch.nn.Linear(time_embed_dim, self.in_channels),\n        )\n\n        self.merge_strategy = merge_strategy\n        if self.merge_strategy == \"fixed\":\n            self.register_buffer(\"mix_factor\", torch.Tensor([alpha]))\n        elif self.merge_strategy == \"learned\":\n            self.register_parameter(\n                \"mix_factor\", torch.nn.Parameter(torch.Tensor([alpha]))\n            )\n        else:\n            raise ValueError(f\"unknown merge strategy {self.merge_strategy}\")\n\n    def forward(self, x, timesteps, skip_time_block=False):\n        if skip_time_block:\n            return super().forward(x)\n\n        x_in = x\n        x = self.attention(x)\n        h, w = x.shape[2:]\n        x = rearrange(x, \"b c h w -> b (h w) c\")\n\n        x_mix = x\n        num_frames = torch.arange(timesteps, device=x.device)\n        num_frames = repeat(num_frames, \"t -> b t\", b=x.shape[0] // timesteps)\n        num_frames = rearrange(num_frames, \"b t -> (b t)\")\n        t_emb = timestep_embedding(num_frames, self.in_channels, repeat_only=False)\n        emb = self.video_time_embed(t_emb)  # b, n_channels\n        emb = emb[:, None, :]\n        x_mix = x_mix + emb\n\n        alpha = self.get_alpha()\n        x_mix = self.time_mix_block(x_mix, timesteps=timesteps)\n        x = alpha * x + (1.0 - alpha) * x_mix  # alpha merge\n\n        x = rearrange(x, \"b (h w) c -> b c h w\", h=h, w=w)\n        x = self.proj_out(x)\n\n        return x_in + x\n\n    def get_alpha(\n        self,\n    ):\n        if self.merge_strategy == \"fixed\":\n            return self.mix_factor\n        elif self.merge_strategy == \"learned\":\n            return torch.sigmoid(self.mix_factor)\n        else:\n            raise NotImplementedError(f\"unknown merge strategy {self.merge_strategy}\")\n\n\ndef make_time_attn(\n    in_channels,\n    attn_type=\"vanilla\",\n    attn_kwargs=None,\n    alpha: float = 0,\n    merge_strategy: str = \"learned\",\n):\n    assert attn_type in [\n        \"vanilla\",\n        \"vanilla-xformers\",\n    ], f\"attn_type {attn_type} not supported for spatio-temporal attention\"\n    print(\n        f\"making spatial and temporal attention of type '{attn_type}' with {in_channels} in_channels\"\n    )\n    if not XFORMERS_IS_AVAILABLE and attn_type == \"vanilla-xformers\":\n        print(\n            f\"Attention mode '{attn_type}' is not available. Falling back to vanilla attention. \"\n            f\"This is not a problem in Pytorch >= 2.0. 
FYI, you are running with PyTorch version {torch.__version__}\"\n        )\n        attn_type = \"vanilla\"\n\n    if attn_type == \"vanilla\":\n        assert attn_kwargs is None\n        return partialclass(\n            VideoBlock, in_channels, alpha=alpha, merge_strategy=merge_strategy\n        )\n    elif attn_type == \"vanilla-xformers\":\n        print(f\"building MemoryEfficientAttnBlock with {in_channels} in_channels...\")\n        return partialclass(\n            MemoryEfficientVideoBlock,\n            in_channels,\n            alpha=alpha,\n            merge_strategy=merge_strategy,\n        )\n    else:\n        raise NotImplementedError()\n\n\nclass Conv2DWrapper(torch.nn.Conv2d):\n    def forward(self, input: torch.Tensor, **kwargs) -> torch.Tensor:\n        return super().forward(input)\n\n\nclass VideoDecoder(Decoder):\n    available_time_modes = [\"all\", \"conv-only\", \"attn-only\"]\n\n    def __init__(\n        self,\n        *args,\n        video_kernel_size: Union[int, list] = [3, 1, 1],\n        alpha: float = 0.0,\n        merge_strategy: str = \"learned\",\n        time_mode: str = \"conv-only\",\n        **kwargs,\n    ):\n        self.video_kernel_size = video_kernel_size\n        self.alpha = alpha\n        self.merge_strategy = merge_strategy\n        self.time_mode = time_mode\n        assert (\n            self.time_mode in self.available_time_modes\n        ), f\"time_mode parameter has to be in {self.available_time_modes}\"\n        super().__init__(*args, **kwargs)\n\n    def get_last_layer(self, skip_time_mix=False, **kwargs):\n        if self.time_mode == \"attn-only\":\n            raise NotImplementedError(\"TODO\")\n        else:\n            return (\n                self.conv_out.time_mix_conv.weight\n                if not skip_time_mix\n                else self.conv_out.weight\n            )\n\n    def _make_attn(self) -> Callable:\n        if self.time_mode not in [\"conv-only\", \"only-last-conv\"]:\n            return partialclass(\n                make_time_attn,\n                alpha=self.alpha,\n                merge_strategy=self.merge_strategy,\n            )\n        else:\n            return super()._make_attn()\n\n    def _make_conv(self) -> Callable:\n        if self.time_mode != \"attn-only\":\n            return partialclass(AE3DConv, video_kernel_size=self.video_kernel_size)\n        else:\n            return Conv2DWrapper\n\n    def _make_resblock(self) -> Callable:\n        if self.time_mode not in [\"attn-only\", \"only-last-conv\"]:\n            return partialclass(\n                VideoResBlock,\n                video_kernel_size=self.video_kernel_size,\n                alpha=self.alpha,\n                merge_strategy=self.merge_strategy,\n            )\n        else:\n            return super()._make_resblock()\n"
  },
  {
    "path": "ToonCrafter/lvdm/models/ddpm3d.py",
    "content": "\"\"\"\nwild mixture of\nhttps://github.com/openai/improved-diffusion/blob/e94489283bb876ac1477d5dd7709bbbd2d9902ce/improved_diffusion/gaussian_diffusion.py\nhttps://github.com/lucidrains/denoising-diffusion-pytorch/blob/7706bdfc6f527f58d33f84b7b522e61e6e3164b3/denoising_diffusion_pytorch/denoising_diffusion_pytorch.py\nhttps://github.com/CompVis/taming-transformers\n-- merci\n\"\"\"\n\nimport os\nfrom functools import partial\nfrom contextlib import contextmanager\nimport numpy as np\nfrom tqdm import tqdm\nfrom einops import rearrange, repeat\nimport logging\nmainlogger = logging.getLogger('mainlogger')\nimport random\nimport torch\nimport torch.nn as nn\nfrom torch.optim.lr_scheduler import LambdaLR, CosineAnnealingLR\nfrom torchvision.utils import make_grid\nimport pytorch_lightning as pl\nfrom pytorch_lightning.utilities import rank_zero_only\nfrom ToonCrafter.utils.utils import instantiate_from_config\nfrom lvdm.ema import LitEma\nfrom lvdm.models.samplers.ddim import DDIMSampler\nfrom lvdm.distributions import DiagonalGaussianDistribution\nfrom lvdm.models.utils_diffusion import make_beta_schedule, rescale_zero_terminal_snr\nfrom lvdm.basics import disabled_train\nfrom lvdm.common import (\n    extract_into_tensor,\n    noise_like,\n    exists,\n    default\n)\nimport math\nfrom lvdm.models.autoencoder_dualref import VideoDecoder\n__conditioning_keys__ = {'concat': 'c_concat',\n                         'crossattn': 'c_crossattn',\n                         'adm': 'y'}\n\nclass DDPM(pl.LightningModule):\n    # classic DDPM with Gaussian diffusion, in image space\n    def __init__(self,\n                 unet_config,\n                 timesteps=1000,\n                 beta_schedule=\"linear\",\n                 loss_type=\"l2\",\n                 ckpt_path=None,\n                 ignore_keys=[],\n                 load_only_unet=False,\n                 monitor=None,\n                 use_ema=True,\n                 first_stage_key=\"image\",\n                 image_size=256,\n                 channels=3,\n                 log_every_t=100,\n                 clip_denoised=True,\n                 linear_start=1e-4,\n                 linear_end=2e-2,\n                 cosine_s=8e-3,\n                 given_betas=None,\n                 original_elbo_weight=0.,\n                 v_posterior=0.,  # weight for choosing posterior variance as sigma = (1-v) * beta_tilde + v * beta\n                 l_simple_weight=1.,\n                 conditioning_key=None,\n                 parameterization=\"eps\",  # all assuming fixed variance schedules\n                 scheduler_config=None,\n                 use_positional_encodings=False,\n                 learn_logvar=False,\n                 logvar_init=0.,\n                 rescale_betas_zero_snr=False,\n                 ):\n        super().__init__()\n        assert parameterization in [\"eps\", \"x0\", \"v\"], 'currently only supporting \"eps\" and \"x0\" and \"v\"'\n        self.parameterization = parameterization\n        mainlogger.info(f\"{self.__class__.__name__}: Running in {self.parameterization}-prediction mode\")\n        self.cond_stage_model = None\n        self.clip_denoised = clip_denoised\n        self.log_every_t = log_every_t\n        self.first_stage_key = first_stage_key\n        self.channels = channels\n        self.temporal_length = unet_config.params.temporal_length\n        self.image_size = image_size  # try conv?\n        if isinstance(self.image_size, int):\n            self.image_size = [self.image_size, 
self.image_size]\n        self.use_positional_encodings = use_positional_encodings\n        self.model = DiffusionWrapper(unet_config, conditioning_key)\n        #count_params(self.model, verbose=True)\n        self.use_ema = use_ema\n        self.rescale_betas_zero_snr = rescale_betas_zero_snr\n        if self.use_ema:\n            self.model_ema = LitEma(self.model)\n            mainlogger.info(f\"Keeping EMAs of {len(list(self.model_ema.buffers()))}.\")\n\n        self.use_scheduler = scheduler_config is not None\n        if self.use_scheduler:\n            self.scheduler_config = scheduler_config\n\n        self.v_posterior = v_posterior\n        self.original_elbo_weight = original_elbo_weight\n        self.l_simple_weight = l_simple_weight\n\n        if monitor is not None:\n            self.monitor = monitor\n        if ckpt_path is not None:\n            self.init_from_ckpt(ckpt_path, ignore_keys=ignore_keys, only_model=load_only_unet)\n\n        self.register_schedule(given_betas=given_betas, beta_schedule=beta_schedule, timesteps=timesteps,\n                               linear_start=linear_start, linear_end=linear_end, cosine_s=cosine_s)\n\n        ## for reschedule\n        self.given_betas = given_betas\n        self.beta_schedule = beta_schedule\n        self.timesteps = timesteps\n        self.cosine_s = cosine_s\n\n        self.loss_type = loss_type\n\n        self.learn_logvar = learn_logvar\n        self.logvar = torch.full(fill_value=logvar_init, size=(self.num_timesteps,))\n        if self.learn_logvar:\n            self.logvar = nn.Parameter(self.logvar, requires_grad=True)\n\n    def register_schedule(self, given_betas=None, beta_schedule=\"linear\", timesteps=1000,\n                          linear_start=1e-4, linear_end=2e-2, cosine_s=8e-3):\n        if exists(given_betas):\n            betas = given_betas\n        else:\n            betas = make_beta_schedule(beta_schedule, timesteps, linear_start=linear_start, linear_end=linear_end,\n                                       cosine_s=cosine_s)\n        if self.rescale_betas_zero_snr:\n            betas = rescale_zero_terminal_snr(betas)\n        \n        alphas = 1. - betas\n        alphas_cumprod = np.cumprod(alphas, axis=0)\n        alphas_cumprod_prev = np.append(1., alphas_cumprod[:-1])\n\n        timesteps, = betas.shape\n        self.num_timesteps = int(timesteps)\n        self.linear_start = linear_start\n        self.linear_end = linear_end\n        assert alphas_cumprod.shape[0] == self.num_timesteps, 'alphas have to be defined for each timestep'\n\n        to_torch = partial(torch.tensor, dtype=torch.float32)\n\n        self.register_buffer('betas', to_torch(betas))\n        self.register_buffer('alphas_cumprod', to_torch(alphas_cumprod))\n        self.register_buffer('alphas_cumprod_prev', to_torch(alphas_cumprod_prev))\n\n        # calculations for diffusion q(x_t | x_{t-1}) and others\n        self.register_buffer('sqrt_alphas_cumprod', to_torch(np.sqrt(alphas_cumprod)))\n        self.register_buffer('sqrt_one_minus_alphas_cumprod', to_torch(np.sqrt(1. - alphas_cumprod)))\n        self.register_buffer('log_one_minus_alphas_cumprod', to_torch(np.log(1. - alphas_cumprod)))\n\n        if self.parameterization != 'v':\n            self.register_buffer('sqrt_recip_alphas_cumprod', to_torch(np.sqrt(1. / alphas_cumprod)))\n            self.register_buffer('sqrt_recipm1_alphas_cumprod', to_torch(np.sqrt(1. 
/ alphas_cumprod - 1)))\n        else:\n            self.register_buffer('sqrt_recip_alphas_cumprod', torch.zeros_like(to_torch(alphas_cumprod)))\n            self.register_buffer('sqrt_recipm1_alphas_cumprod', torch.zeros_like(to_torch(alphas_cumprod)))\n\n        # calculations for posterior q(x_{t-1} | x_t, x_0)\n        posterior_variance = (1 - self.v_posterior) * betas * (1. - alphas_cumprod_prev) / (\n                    1. - alphas_cumprod) + self.v_posterior * betas\n        # above: equal to 1. / (1. / (1. - alpha_cumprod_tm1) + alpha_t / beta_t)\n        self.register_buffer('posterior_variance', to_torch(posterior_variance))\n        # below: log calculation clipped because the posterior variance is 0 at the beginning of the diffusion chain\n        self.register_buffer('posterior_log_variance_clipped', to_torch(np.log(np.maximum(posterior_variance, 1e-20))))\n        self.register_buffer('posterior_mean_coef1', to_torch(\n            betas * np.sqrt(alphas_cumprod_prev) / (1. - alphas_cumprod)))\n        self.register_buffer('posterior_mean_coef2', to_torch(\n            (1. - alphas_cumprod_prev) * np.sqrt(alphas) / (1. - alphas_cumprod)))\n\n        if self.parameterization == \"eps\":\n            lvlb_weights = self.betas ** 2 / (\n                        2 * self.posterior_variance * to_torch(alphas) * (1 - self.alphas_cumprod))\n        elif self.parameterization == \"x0\":\n            lvlb_weights = 0.5 * np.sqrt(torch.Tensor(alphas_cumprod)) / (2. * 1 - torch.Tensor(alphas_cumprod))\n        elif self.parameterization == \"v\":\n            lvlb_weights = torch.ones_like(self.betas ** 2 / (\n                    2 * self.posterior_variance * to_torch(alphas) * (1 - self.alphas_cumprod)))\n        else:\n            raise NotImplementedError(\"mu not supported\")\n        # TODO how to choose this term\n        lvlb_weights[0] = lvlb_weights[1]\n        self.register_buffer('lvlb_weights', lvlb_weights, persistent=False)\n        assert not torch.isnan(self.lvlb_weights).all()\n\n    @contextmanager\n    def ema_scope(self, context=None):\n        if self.use_ema:\n            self.model_ema.store(self.model.parameters())\n            self.model_ema.copy_to(self.model)\n            if context is not None:\n                mainlogger.info(f\"{context}: Switched to EMA weights\")\n        try:\n            yield None\n        finally:\n            if self.use_ema:\n                self.model_ema.restore(self.model.parameters())\n                if context is not None:\n                    mainlogger.info(f\"{context}: Restored training weights\")\n\n    def init_from_ckpt(self, path, ignore_keys=list(), only_model=False):\n        sd = torch.load(path, map_location=\"cpu\")\n        if \"state_dict\" in list(sd.keys()):\n            sd = sd[\"state_dict\"]\n        keys = list(sd.keys())\n        for k in keys:\n            for ik in ignore_keys:\n                if k.startswith(ik):\n                    mainlogger.info(\"Deleting key {} from state_dict.\".format(k))\n                    del sd[k]\n        missing, unexpected = self.load_state_dict(sd, strict=False) if not only_model else self.model.load_state_dict(\n            sd, strict=False)\n        mainlogger.info(f\"Restored from {path} with {len(missing)} missing and {len(unexpected)} unexpected keys\")\n        if len(missing) > 0:\n            mainlogger.info(f\"Missing Keys: {missing}\")\n        if len(unexpected) > 0:\n            mainlogger.info(f\"Unexpected Keys: {unexpected}\")\n\n    def 
q_mean_variance(self, x_start, t):\n        \"\"\"\n        Get the distribution q(x_t | x_0).\n        :param x_start: the [N x C x ...] tensor of noiseless inputs.\n        :param t: the number of diffusion steps (minus 1). Here, 0 means one step.\n        :return: A tuple (mean, variance, log_variance), all of x_start's shape.\n        \"\"\"\n        mean = (extract_into_tensor(self.sqrt_alphas_cumprod, t, x_start.shape) * x_start)\n        variance = extract_into_tensor(1.0 - self.alphas_cumprod, t, x_start.shape)\n        log_variance = extract_into_tensor(self.log_one_minus_alphas_cumprod, t, x_start.shape)\n        return mean, variance, log_variance\n\n    def predict_start_from_noise(self, x_t, t, noise):\n        return (\n                extract_into_tensor(self.sqrt_recip_alphas_cumprod, t, x_t.shape) * x_t -\n                extract_into_tensor(self.sqrt_recipm1_alphas_cumprod, t, x_t.shape) * noise\n        )\n\n    def predict_start_from_z_and_v(self, x_t, t, v):\n        # self.register_buffer('sqrt_alphas_cumprod', to_torch(np.sqrt(alphas_cumprod)))\n        # self.register_buffer('sqrt_one_minus_alphas_cumprod', to_torch(np.sqrt(1. - alphas_cumprod)))\n        return (\n                extract_into_tensor(self.sqrt_alphas_cumprod, t, x_t.shape) * x_t -\n                extract_into_tensor(self.sqrt_one_minus_alphas_cumprod, t, x_t.shape) * v\n        )\n\n    def predict_eps_from_z_and_v(self, x_t, t, v):\n        return (\n                extract_into_tensor(self.sqrt_alphas_cumprod, t, x_t.shape) * v +\n                extract_into_tensor(self.sqrt_one_minus_alphas_cumprod, t, x_t.shape) * x_t\n        )\n\n    def q_posterior(self, x_start, x_t, t):\n        posterior_mean = (\n                extract_into_tensor(self.posterior_mean_coef1, t, x_t.shape) * x_start +\n                extract_into_tensor(self.posterior_mean_coef2, t, x_t.shape) * x_t\n        )\n        posterior_variance = extract_into_tensor(self.posterior_variance, t, x_t.shape)\n        posterior_log_variance_clipped = extract_into_tensor(self.posterior_log_variance_clipped, t, x_t.shape)\n        return posterior_mean, posterior_variance, posterior_log_variance_clipped\n\n    def p_mean_variance(self, x, t, clip_denoised: bool):\n        model_out = self.model(x, t)\n        if self.parameterization == \"eps\":\n            x_recon = self.predict_start_from_noise(x, t=t, noise=model_out)\n        elif self.parameterization == \"x0\":\n            x_recon = model_out\n        if clip_denoised:\n            x_recon.clamp_(-1., 1.)\n\n        model_mean, posterior_variance, posterior_log_variance = self.q_posterior(x_start=x_recon, x_t=x, t=t)\n        return model_mean, posterior_variance, posterior_log_variance\n\n    @torch.no_grad()\n    def p_sample(self, x, t, clip_denoised=True, repeat_noise=False):\n        b, *_, device = *x.shape, x.device\n        model_mean, _, model_log_variance = self.p_mean_variance(x=x, t=t, clip_denoised=clip_denoised)\n        noise = noise_like(x.shape, device, repeat_noise)\n        # no noise when t == 0\n        nonzero_mask = (1 - (t == 0).float()).reshape(b, *((1,) * (len(x.shape) - 1)))\n        return model_mean + nonzero_mask * (0.5 * model_log_variance).exp() * noise\n\n    @torch.no_grad()\n    def p_sample_loop(self, shape, return_intermediates=False):\n        device = self.betas.device\n        b = shape[0]\n        img = torch.randn(shape, device=device)\n        intermediates = [img]\n        for i in tqdm(reversed(range(0, self.num_timesteps)), 
desc='Sampling t', total=self.num_timesteps):\n            img = self.p_sample(img, torch.full((b,), i, device=device, dtype=torch.long),\n                                clip_denoised=self.clip_denoised)\n            if i % self.log_every_t == 0 or i == self.num_timesteps - 1:\n                intermediates.append(img)\n        if return_intermediates:\n            return img, intermediates\n        return img\n\n    @torch.no_grad()\n    def sample(self, batch_size=16, return_intermediates=False):\n        image_size = self.image_size\n        channels = self.channels\n        return self.p_sample_loop((batch_size, channels, image_size, image_size),\n                                  return_intermediates=return_intermediates)\n\n    def q_sample(self, x_start, t, noise=None):\n        noise = default(noise, lambda: torch.randn_like(x_start))\n        return (extract_into_tensor(self.sqrt_alphas_cumprod, t, x_start.shape) * x_start +\n                extract_into_tensor(self.sqrt_one_minus_alphas_cumprod, t, x_start.shape) * noise)\n\n    def get_v(self, x, noise, t):\n        return (\n                extract_into_tensor(self.sqrt_alphas_cumprod, t, x.shape) * noise -\n                extract_into_tensor(self.sqrt_one_minus_alphas_cumprod, t, x.shape) * x\n        )\n\n    def get_loss(self, pred, target, mean=True):\n        if self.loss_type == 'l1':\n            loss = (target - pred).abs()\n            if mean:\n                loss = loss.mean()\n        elif self.loss_type == 'l2':\n            if mean:\n                loss = torch.nn.functional.mse_loss(target, pred)\n            else:\n                loss = torch.nn.functional.mse_loss(target, pred, reduction='none')\n        else:\n            raise NotImplementedError(f\"unknown loss type '{self.loss_type}'\")\n\n        return loss\n\n    def p_losses(self, x_start, t, noise=None):\n        noise = default(noise, lambda: torch.randn_like(x_start))\n        x_noisy = self.q_sample(x_start=x_start, t=t, noise=noise)\n        model_out = self.model(x_noisy, t)\n\n        loss_dict = {}\n        if self.parameterization == \"eps\":\n            target = noise\n        elif self.parameterization == \"x0\":\n            target = x_start\n        elif self.parameterization == \"v\":\n            target = self.get_v(x_start, noise, t)\n        else:\n            raise NotImplementedError(f\"Parameterization {self.parameterization} not yet supported\")\n\n        loss = self.get_loss(model_out, target, mean=False).mean(dim=[1, 2, 3])\n\n        log_prefix = 'train' if self.training else 'val'\n\n        loss_dict.update({f'{log_prefix}/loss_simple': loss.mean()})\n        loss_simple = loss.mean() * self.l_simple_weight\n\n        loss_vlb = (self.lvlb_weights[t] * loss).mean()\n        loss_dict.update({f'{log_prefix}/loss_vlb': loss_vlb})\n\n        loss = loss_simple + self.original_elbo_weight * loss_vlb\n\n        loss_dict.update({f'{log_prefix}/loss': loss})\n\n        return loss, loss_dict\n\n    def forward(self, x, *args, **kwargs):\n        # b, c, h, w, device, img_size, = *x.shape, x.device, self.image_size\n        # assert h == img_size and w == img_size, f'height and width of image must be {img_size}'\n        t = torch.randint(0, self.num_timesteps, (x.shape[0],), device=self.device).long()\n        return self.p_losses(x, t, *args, **kwargs)\n\n    def get_input(self, batch, k):\n        x = batch[k]\n        '''\n        if len(x.shape) == 3:\n            x = x[..., None]\n        x = rearrange(x, 'b h w c -> b c h w')\n 
       '''\n        x = x.to(memory_format=torch.contiguous_format).float()\n        return x\n\n    def shared_step(self, batch):\n        x = self.get_input(batch, self.first_stage_key)\n        loss, loss_dict = self(x)\n        return loss, loss_dict\n\n    def training_step(self, batch, batch_idx):\n        loss, loss_dict = self.shared_step(batch)\n\n        self.log_dict(loss_dict, prog_bar=True,\n                      logger=True, on_step=True, on_epoch=True)\n\n        self.log(\"global_step\", self.global_step,\n                 prog_bar=True, logger=True, on_step=True, on_epoch=False)\n\n        if self.use_scheduler:\n            lr = self.optimizers().param_groups[0]['lr']\n            self.log('lr_abs', lr, prog_bar=True, logger=True, on_step=True, on_epoch=False)\n\n        return loss\n\n    @torch.no_grad()\n    def validation_step(self, batch, batch_idx):\n        _, loss_dict_no_ema = self.shared_step(batch)\n        with self.ema_scope():\n            _, loss_dict_ema = self.shared_step(batch)\n            loss_dict_ema = {key + '_ema': loss_dict_ema[key] for key in loss_dict_ema}\n        self.log_dict(loss_dict_no_ema, prog_bar=False, logger=True, on_step=False, on_epoch=True)\n        self.log_dict(loss_dict_ema, prog_bar=False, logger=True, on_step=False, on_epoch=True)\n\n    def on_train_batch_end(self, *args, **kwargs):\n        if self.use_ema:\n            self.model_ema(self.model)\n\n    def _get_rows_from_list(self, samples):\n        n_imgs_per_row = len(samples)\n        denoise_grid = rearrange(samples, 'n b c h w -> b n c h w')\n        denoise_grid = rearrange(denoise_grid, 'b n c h w -> (b n) c h w')\n        denoise_grid = make_grid(denoise_grid, nrow=n_imgs_per_row)\n        return denoise_grid\n\n    @torch.no_grad()\n    def log_images(self, batch, N=8, n_row=2, sample=True, return_keys=None, **kwargs):\n        log = dict()\n        x = self.get_input(batch, self.first_stage_key)\n        N = min(x.shape[0], N)\n        n_row = min(x.shape[0], n_row)\n        x = x.to(self.device)[:N]\n        log[\"inputs\"] = x\n\n        # get diffusion row\n        diffusion_row = list()\n        x_start = x[:n_row]\n\n        for t in range(self.num_timesteps):\n            if t % self.log_every_t == 0 or t == self.num_timesteps - 1:\n                t = repeat(torch.tensor([t]), '1 -> b', b=n_row)\n                t = t.to(self.device).long()\n                noise = torch.randn_like(x_start)\n                x_noisy = self.q_sample(x_start=x_start, t=t, noise=noise)\n                diffusion_row.append(x_noisy)\n\n        log[\"diffusion_row\"] = self._get_rows_from_list(diffusion_row)\n\n        if sample:\n            # get denoise row\n            with self.ema_scope(\"Plotting\"):\n                samples, denoise_row = self.sample(batch_size=N, return_intermediates=True)\n\n            log[\"samples\"] = samples\n            log[\"denoise_row\"] = self._get_rows_from_list(denoise_row)\n\n        if return_keys:\n            if np.intersect1d(list(log.keys()), return_keys).shape[0] == 0:\n                return log\n            else:\n                return {key: log[key] for key in return_keys}\n        return log\n\n    def configure_optimizers(self):\n        lr = self.learning_rate\n        params = list(self.model.parameters())\n        if self.learn_logvar:\n            params = params + [self.logvar]\n        opt = torch.optim.AdamW(params, lr=lr)\n        return opt\n\nclass LatentDiffusion(DDPM):\n    \"\"\"main class\"\"\"\n    def 
__init__(self,\n                 first_stage_config,\n                 cond_stage_config,\n                 num_timesteps_cond=None,\n                 cond_stage_key=\"caption\",\n                 cond_stage_trainable=False,\n                 cond_stage_forward=None,\n                 conditioning_key=None,\n                 uncond_prob=0.2,\n                 uncond_type=\"empty_seq\",\n                 scale_factor=1.0,\n                 scale_by_std=False,\n                 encoder_type=\"2d\",\n                 only_model=False,\n                 noise_strength=0,\n                 use_dynamic_rescale=False,\n                 base_scale=0.7,\n                 turning_step=400,\n                 loop_video=False,\n                 fps_condition_type='fs',\n                 perframe_ae=False,\n                 # added\n                 logdir=None,\n                 rand_cond_frame=False,\n                 en_and_decode_n_samples_a_time=None,\n                 *args, **kwargs):\n        self.num_timesteps_cond = default(num_timesteps_cond, 1)\n        self.scale_by_std = scale_by_std\n        assert self.num_timesteps_cond <= kwargs['timesteps']\n        # for backwards compatibility after implementation of DiffusionWrapper\n        ckpt_path = kwargs.pop(\"ckpt_path\", None)\n        ignore_keys = kwargs.pop(\"ignore_keys\", [])\n        conditioning_key = default(conditioning_key, 'crossattn')\n        super().__init__(conditioning_key=conditioning_key, *args, **kwargs)\n\n        self.cond_stage_trainable = cond_stage_trainable\n        self.cond_stage_key = cond_stage_key\n        self.noise_strength = noise_strength\n        self.use_dynamic_rescale = use_dynamic_rescale\n        self.loop_video = loop_video\n        self.fps_condition_type = fps_condition_type\n        self.perframe_ae = perframe_ae\n\n        self.logdir = logdir\n        self.rand_cond_frame = rand_cond_frame\n        self.en_and_decode_n_samples_a_time = en_and_decode_n_samples_a_time\n\n        try:\n            self.num_downs = len(first_stage_config.params.ddconfig.ch_mult) - 1\n        except:\n            self.num_downs = 0\n        if not scale_by_std:\n            self.scale_factor = scale_factor\n        else:\n            self.register_buffer('scale_factor', torch.tensor(scale_factor))\n\n        if use_dynamic_rescale:\n            scale_arr1 = np.linspace(1.0, base_scale, turning_step)\n            scale_arr2 = np.full(self.num_timesteps, base_scale)\n            scale_arr = np.concatenate((scale_arr1, scale_arr2))\n            to_torch = partial(torch.tensor, dtype=torch.float32)\n            self.register_buffer('scale_arr', to_torch(scale_arr))\n\n        self.instantiate_first_stage(first_stage_config)\n        self.instantiate_cond_stage(cond_stage_config)\n        self.first_stage_config = first_stage_config\n        self.cond_stage_config = cond_stage_config        \n        self.clip_denoised = False\n\n        self.cond_stage_forward = cond_stage_forward\n        self.encoder_type = encoder_type\n        assert(encoder_type in [\"2d\", \"3d\"])\n        self.uncond_prob = uncond_prob\n        self.classifier_free_guidance = True if uncond_prob > 0 else False\n        assert(uncond_type in [\"zero_embed\", \"empty_seq\"])\n        self.uncond_type = uncond_type\n\n        self.restarted_from_ckpt = False\n        if ckpt_path is not None:\n            self.init_from_ckpt(ckpt_path, ignore_keys, only_model=only_model)\n            self.restarted_from_ckpt = True\n                \n    def 
make_cond_schedule(self, ):\n        self.cond_ids = torch.full(size=(self.num_timesteps,), fill_value=self.num_timesteps - 1, dtype=torch.long)\n        ids = torch.round(torch.linspace(0, self.num_timesteps - 1, self.num_timesteps_cond)).long()\n        self.cond_ids[:self.num_timesteps_cond] = ids\n\n    @rank_zero_only\n    @torch.no_grad()\n    def on_train_batch_start(self, batch, batch_idx, dataloader_idx=None):\n        # only for very first batch, reset the self.scale_factor\n        if self.scale_by_std and self.current_epoch == 0 and self.global_step == 0 and batch_idx == 0 and \\\n                not self.restarted_from_ckpt:\n            assert self.scale_factor == 1., 'rather not use custom rescaling and std-rescaling simultaneously'\n            # set rescale weight to 1./std of encodings\n            mainlogger.info(\"### USING STD-RESCALING ###\")\n            x = super().get_input(batch, self.first_stage_key)\n            x = x.to(self.device)\n            encoder_posterior = self.encode_first_stage(x)\n            z = self.get_first_stage_encoding(encoder_posterior).detach()\n            del self.scale_factor\n            self.register_buffer('scale_factor', 1. / z.flatten().std())\n            mainlogger.info(f\"setting self.scale_factor to {self.scale_factor}\")\n            mainlogger.info(\"### USING STD-RESCALING ###\")\n            mainlogger.info(f\"std={z.flatten().std()}\")\n\n    def register_schedule(self, given_betas=None, beta_schedule=\"linear\", timesteps=1000,\n                          linear_start=1e-4, linear_end=2e-2, cosine_s=8e-3):\n        super().register_schedule(given_betas, beta_schedule, timesteps, linear_start, linear_end, cosine_s)\n\n        self.shorten_cond_schedule = self.num_timesteps_cond > 1\n        if self.shorten_cond_schedule:\n            self.make_cond_schedule()\n\n    def instantiate_first_stage(self, config):\n        model = instantiate_from_config(config)\n        self.first_stage_model = model.eval()\n        self.first_stage_model.train = disabled_train\n        for param in self.first_stage_model.parameters():\n            param.requires_grad = False\n\n    def instantiate_cond_stage(self, config):\n        if not self.cond_stage_trainable:\n            model = instantiate_from_config(config)\n            self.cond_stage_model = model.eval()\n            self.cond_stage_model.train = disabled_train\n            for param in self.cond_stage_model.parameters():\n                param.requires_grad = False\n        else:\n            model = instantiate_from_config(config)\n            self.cond_stage_model = model\n    \n    def get_learned_conditioning(self, c):\n        if self.cond_stage_forward is None:\n            if hasattr(self.cond_stage_model, 'encode') and callable(self.cond_stage_model.encode):\n                c = self.cond_stage_model.encode(c)\n                if isinstance(c, DiagonalGaussianDistribution):\n                    c = c.mode()\n            else:\n                c = self.cond_stage_model(c)\n        else:\n            assert hasattr(self.cond_stage_model, self.cond_stage_forward)\n            c = getattr(self.cond_stage_model, self.cond_stage_forward)(c)\n        return c\n\n    def get_first_stage_encoding(self, encoder_posterior, noise=None):\n        if isinstance(encoder_posterior, DiagonalGaussianDistribution):\n            z = encoder_posterior.sample(noise=noise)\n        elif isinstance(encoder_posterior, torch.Tensor):\n            z = encoder_posterior\n        else:\n            
raise NotImplementedError(f\"encoder_posterior of type '{type(encoder_posterior)}' not yet implemented\")\n        return self.scale_factor * z\n   \n    @torch.no_grad()\n    def encode_first_stage(self, x):\n        if self.encoder_type == \"2d\" and x.dim() == 5:\n            b, _, t, _, _ = x.shape\n            x = rearrange(x, 'b c t h w -> (b t) c h w')\n            reshape_back = True\n        else:\n            reshape_back = False\n        \n        ## consume more GPU memory but faster\n        if not self.perframe_ae:\n            encoder_posterior = self.first_stage_model.encode(x)\n            results = self.get_first_stage_encoding(encoder_posterior).detach()\n        else:  ## consume less GPU memory but slower\n            results = []\n            for index in range(x.shape[0]):\n                frame_batch = self.first_stage_model.encode(x[index:index+1,:,:,:])\n                frame_result = self.get_first_stage_encoding(frame_batch).detach()\n                results.append(frame_result)\n            results = torch.cat(results, dim=0)\n\n        if reshape_back:\n            results = rearrange(results, '(b t) c h w -> b c t h w', b=b,t=t)\n        \n        return results\n    \n    def decode_core(self, z, **kwargs):\n        if self.encoder_type == \"2d\" and z.dim() == 5:\n            b, _, t, _, _ = z.shape\n            z = rearrange(z, 'b c t h w -> (b t) c h w')\n            reshape_back = True\n        else:\n            reshape_back = False\n\n        z = 1. / self.scale_factor * z \n        if not self.perframe_ae: \n            results = self.first_stage_model.decode(z, **kwargs)\n        else:\n\n            results = []\n            \n            n_samples = default(self.en_and_decode_n_samples_a_time, self.temporal_length)\n            n_rounds = math.ceil(z.shape[0] / n_samples)\n            from contextlib import nullcontext\n            with torch.autocast(\"cuda\", enabled=True) if torch.cuda.is_available() else nullcontext():\n                for n in range(n_rounds):\n                    if isinstance(self.first_stage_model.decoder, VideoDecoder):\n                        kwargs.update({\"timesteps\": len(z[n * n_samples : (n + 1) * n_samples])})\n                    else:\n                        kwargs = {}\n                    if os.environ.get(\"TOON_MEM_STRATEGY\", \"none\") == \"low\":\n                        kwargs2 = kwargs.copy()\n                        kwargs2.update({\"timesteps\": 3})\n                        out_list = []\n                        for k in range(z.shape[0]):\n                            out = self.first_stage_model.decode(z[[0, k, z.shape[0] - 1]], **kwargs2)\n                            out_list.append(out[1:2])\n                        out = torch.cat(out_list, dim=0)\n                    else:\n                        out = self.first_stage_model.decode(\n                            z[n * n_samples : (n + 1) * n_samples], **kwargs\n                        )\n                    results.append(out)\n            results = torch.cat(results, dim=0)\n\n        if reshape_back:\n            results = rearrange(results, '(b t) c h w -> b c t h w', b=b,t=t)\n        return results\n\n    @torch.no_grad()\n    def decode_first_stage(self, z, **kwargs):\n        return self.decode_core(z, **kwargs)\n\n    # same as above but without decorator\n    def differentiable_decode_first_stage(self, z, **kwargs):\n        return self.decode_core(z, **kwargs)\n    \n    @torch.no_grad()\n    def get_batch_input(self, batch, 
random_uncond, return_first_stage_outputs=False, return_original_cond=False):\n        ## video shape: b, c, t, h, w\n        x = super().get_input(batch, self.first_stage_key)\n\n        ## encode video frames x to z via a 2D encoder\n        z = self.encode_first_stage(x)
\n                \n        ## get caption condition\n        cond = batch[self.cond_stage_key]\n        if random_uncond and self.uncond_type == 'empty_seq':\n            for i, ci in enumerate(cond):\n                if random.random() < self.uncond_prob:\n                    cond[i] = \"\"\n        if isinstance(cond, dict) or isinstance(cond, list):\n            cond_emb = self.get_learned_conditioning(cond)\n        else:\n            cond_emb = self.get_learned_conditioning(cond.to(self.device))
\n        if random_uncond and self.uncond_type == 'zero_embed':\n            for i, ci in enumerate(cond):\n                if random.random() < self.uncond_prob:\n                    cond_emb[i] = torch.zeros_like(cond_emb[i])\n        \n        out = [z, cond_emb]\n        ## optional output: self-reconst or caption\n        if return_first_stage_outputs:\n            xrec = self.decode_first_stage(z)\n            out.extend([xrec])\n\n        if return_original_cond:\n            out.append(cond)\n\n        return out
\n\n    def forward(self, x, c, **kwargs):\n        t = torch.randint(0, self.num_timesteps, (x.shape[0],), device=self.device).long()\n        if self.use_dynamic_rescale:\n            x = x * extract_into_tensor(self.scale_arr, t, x.shape)\n        return self.p_losses(x, c, t, **kwargs)\n\n    def shared_step(self, batch, random_uncond, **kwargs):\n        x, c = self.get_batch_input(batch, random_uncond=random_uncond)\n        loss, loss_dict = self(x, c, **kwargs)\n\n        return loss, loss_dict
\n\n    def apply_model(self, x_noisy, t, cond, **kwargs):\n        if isinstance(cond, dict):\n            # hybrid case, cond is expected to be a dict\n            pass\n        else:\n            if not isinstance(cond, list):\n                cond = [cond]\n            key = 'c_concat' if self.model.conditioning_key == 'concat' else 'c_crossattn'\n            cond = {key: cond}\n\n        x_recon = self.model(x_noisy, t, **cond, **kwargs)\n\n        if isinstance(x_recon, tuple):\n            return x_recon[0]\n        else:\n            return x_recon
\n\n    def p_losses(self, x_start, cond, t, noise=None, **kwargs):\n        if self.noise_strength > 0:\n            b, c, f, _, _ = x_start.shape\n            offset_noise = torch.randn(b, c, f, 1, 1, device=x_start.device)\n            noise = default(noise, lambda: torch.randn_like(x_start) + self.noise_strength * offset_noise)\n        else:\n            noise = default(noise, lambda: torch.randn_like(x_start))\n        x_noisy = self.q_sample(x_start=x_start, t=t, noise=noise)\n\n        model_output = self.apply_model(x_noisy, t, cond, **kwargs)\n\n        loss_dict = {}\n        prefix = 'train' if self.training else 'val'
\n\n        if self.parameterization == \"x0\":\n            target = x_start\n        elif self.parameterization == \"eps\":\n            target = noise\n        elif self.parameterization == \"v\":\n            target = self.get_v(x_start, noise, t)\n        else:\n            raise NotImplementedError()\n        \n        loss_simple = self.get_loss(model_output, target, mean=False).mean([1, 2, 3, 4])\n        loss_dict.update({f'{prefix}/loss_simple': loss_simple.mean()})
\n\n        if self.logvar.device != self.device:\n            self.logvar = self.logvar.to(self.device)\n        logvar_t = self.logvar[t]\n        # logvar_t = self.logvar[t.item()].to(self.device) # device conflict when ddp shared\n        loss = loss_simple / torch.exp(logvar_t) + logvar_t\n        # loss = loss_simple / torch.exp(self.logvar) + self.logvar\n        if self.learn_logvar:\n            loss_dict.update({f'{prefix}/loss_gamma': loss.mean()})\n            loss_dict.update({'logvar': self.logvar.data.mean()})\n\n        loss = self.l_simple_weight * loss.mean()
\n\n        loss_vlb = self.get_loss(model_output, target, mean=False).mean(dim=(1, 2, 3, 4))\n        loss_vlb = (self.lvlb_weights[t] * loss_vlb).mean()\n        loss_dict.update({f'{prefix}/loss_vlb': loss_vlb})\n        loss += (self.original_elbo_weight * loss_vlb)\n        loss_dict.update({f'{prefix}/loss': loss})\n\n        return loss, loss_dict
\n\n    def training_step(self, batch, batch_idx):\n        loss, loss_dict = self.shared_step(batch, random_uncond=self.classifier_free_guidance)\n        ## sync_dist | rank_zero_only \n        self.log_dict(loss_dict, prog_bar=True, logger=True, on_step=True, on_epoch=True, sync_dist=False)\n        #self.log(\"epoch/global_step\", self.global_step.float(), prog_bar=True, logger=True, on_step=True, on_epoch=False)\n        '''\n        if self.use_scheduler:\n            lr = self.optimizers().param_groups[0]['lr']\n            self.log('lr_abs', lr, prog_bar=True, logger=True, on_step=True, on_epoch=False, rank_zero_only=True)\n        '''\n        if (batch_idx+1) % self.log_every_t == 0:\n            mainlogger.info(f\"batch:{batch_idx}|epoch:{self.current_epoch} [globalstep:{self.global_step}]: loss={loss}\")\n        return loss
\n    \n    def _get_denoise_row_from_list(self, samples, desc=''):\n        denoise_row = []\n        for zd in tqdm(samples, desc=desc):\n            denoise_row.append(self.decode_first_stage(zd.to(self.device)))\n        n_log_timesteps = len(denoise_row)\n\n        denoise_row = torch.stack(denoise_row)  # n_log_timesteps, b, C, H, W\n        \n        if denoise_row.dim() == 5:\n            denoise_grid = rearrange(denoise_row, 'n b c h w -> b n c h w')\n            denoise_grid = rearrange(denoise_grid, 'b n c h w -> (b n) c h w')\n            denoise_grid = make_grid(denoise_grid, nrow=n_log_timesteps)
\n        elif denoise_row.dim() == 6:\n            # video, grid_size=[n_log_timesteps*bs, t]\n            video_length = denoise_row.shape[3]\n            denoise_grid = rearrange(denoise_row, 'n b c t h w -> b n c t h w')\n            denoise_grid = rearrange(denoise_grid, 'b n c t h w -> (b n) c t h w')\n            denoise_grid = rearrange(denoise_grid, 'n c t h w -> (n t) c h w')\n            denoise_grid = make_grid(denoise_grid, nrow=video_length)\n        else:\n            raise ValueError\n\n        return denoise_grid
\n\n    @torch.no_grad()\n    def log_images(self, batch, sample=True, ddim_steps=200, ddim_eta=1., plot_denoise_rows=False, \\\n                    unconditional_guidance_scale=1.0, **kwargs):\n        \"\"\" log images for LatentDiffusion \"\"\"\n        ##### control the number of sampled images for logging; a larger value may cause OOM\n        sampled_img_num = 2\n        for key in batch.keys():\n            batch[key] = batch[key][:sampled_img_num]\n\n        ## TBD: currently, classifier_free_guidance sampling is only supported by DDIM\n        use_ddim = ddim_steps is not None\n        log = dict()\n        z, c, xrec, xc = self.get_batch_input(batch, 
random_uncond=False,\n                                                return_first_stage_outputs=True,\n                                                return_original_cond=True)\n\n        N = xrec.shape[0]\n        log[\"reconst\"] = xrec\n        log[\"condition\"] = xc\n        \n\n        if sample:\n            # get uncond embedding for classifier-free guidance sampling\n            if unconditional_guidance_scale != 1.0:\n                if isinstance(c, dict):\n                    c_cat, c_emb = c[\"c_concat\"][0], c[\"c_crossattn\"][0]\n                    log[\"condition_cat\"] = c_cat\n                else:\n                    c_emb = c\n\n                if self.uncond_type == \"empty_seq\":\n                    prompts = N * [\"\"]\n                    uc = self.get_learned_conditioning(prompts)\n                elif self.uncond_type == \"zero_embed\":\n                    uc = torch.zeros_like(c_emb)\n                ## hybrid case\n                if isinstance(c, dict):\n                    uc_hybrid = {\"c_concat\": [c_cat], \"c_crossattn\": [uc]}\n                    uc = uc_hybrid\n            else:\n                uc = None\n\n            with self.ema_scope(\"Plotting\"):\n                samples, z_denoise_row = self.sample_log(cond=c, batch_size=N, ddim=use_ddim,\n                                                         ddim_steps=ddim_steps,eta=ddim_eta,\n                                                         unconditional_guidance_scale=unconditional_guidance_scale,\n                                                         unconditional_conditioning=uc, x0=z, **kwargs)\n            x_samples = self.decode_first_stage(samples)\n            log[\"samples\"] = x_samples\n            \n            if plot_denoise_rows:\n                denoise_grid = self._get_denoise_row_from_list(z_denoise_row)\n                log[\"denoise_row\"] = denoise_grid\n\n        return log\n\n    def p_mean_variance(self, x, c, t, clip_denoised: bool, return_x0=False, score_corrector=None, corrector_kwargs=None, **kwargs):\n        t_in = t\n        model_out = self.apply_model(x, t_in, c, **kwargs)\n\n        if score_corrector is not None:\n            assert self.parameterization == \"eps\"\n            model_out = score_corrector.modify_score(self, model_out, x, t, c, **corrector_kwargs)\n\n        if self.parameterization == \"eps\":\n            x_recon = self.predict_start_from_noise(x, t=t, noise=model_out)\n        elif self.parameterization == \"x0\":\n            x_recon = model_out\n        else:\n            raise NotImplementedError()\n\n        if clip_denoised:\n            x_recon.clamp_(-1., 1.)\n\n        model_mean, posterior_variance, posterior_log_variance = self.q_posterior(x_start=x_recon, x_t=x, t=t)\n\n        if return_x0:\n            return model_mean, posterior_variance, posterior_log_variance, x_recon\n        else:\n            return model_mean, posterior_variance, posterior_log_variance\n\n    @torch.no_grad()\n    def p_sample(self, x, c, t, clip_denoised=False, repeat_noise=False, return_x0=False, \\\n                 temperature=1., noise_dropout=0., score_corrector=None, corrector_kwargs=None, **kwargs):\n        b, *_, device = *x.shape, x.device\n        outputs = self.p_mean_variance(x=x, c=c, t=t, clip_denoised=clip_denoised, return_x0=return_x0, \\\n                                       score_corrector=score_corrector, corrector_kwargs=corrector_kwargs, **kwargs)\n        if return_x0:\n            model_mean, _, model_log_variance, x0 = 
outputs\n        else:\n            model_mean, _, model_log_variance = outputs\n\n        noise = noise_like(x.shape, device, repeat_noise) * temperature\n        if noise_dropout > 0.:\n            noise = torch.nn.functional.dropout(noise, p=noise_dropout)\n        # no noise when t == 0\n        nonzero_mask = (1 - (t == 0).float()).reshape(b, *((1,) * (len(x.shape) - 1)))\n\n        if return_x0:\n            return model_mean + nonzero_mask * (0.5 * model_log_variance).exp() * noise, x0\n        else:\n            return model_mean + nonzero_mask * (0.5 * model_log_variance).exp() * noise\n\n    @torch.no_grad()\n    def p_sample_loop(self, cond, shape, return_intermediates=False, x_T=None, verbose=True, callback=None, \\\n                      timesteps=None, mask=None, x0=None, img_callback=None, start_T=None, log_every_t=None, **kwargs):\n\n        if not log_every_t:\n            log_every_t = self.log_every_t\n        device = self.betas.device\n        b = shape[0]        \n        # sample an initial noise\n        if x_T is None:\n            img = torch.randn(shape, device=device)\n        else:\n            img = x_T\n\n        intermediates = [img]\n        if timesteps is None:\n            timesteps = self.num_timesteps\n        if start_T is not None:\n            timesteps = min(timesteps, start_T)\n\n        iterator = tqdm(reversed(range(0, timesteps)), desc='Sampling t', total=timesteps) if verbose else reversed(range(0, timesteps))\n\n        if mask is not None:\n            assert x0 is not None\n            assert x0.shape[2:3] == mask.shape[2:3]  # spatial size has to match\n\n        for i in iterator:\n            ts = torch.full((b,), i, device=device, dtype=torch.long)\n            if self.shorten_cond_schedule:\n                assert self.model.conditioning_key != 'hybrid'\n                tc = self.cond_ids[ts].to(cond.device)\n                cond = self.q_sample(x_start=cond, t=tc, noise=torch.randn_like(cond))\n\n            img = self.p_sample(img, cond, ts, clip_denoised=self.clip_denoised, **kwargs)\n            if mask is not None:\n                img_orig = self.q_sample(x0, ts)\n                img = img_orig * mask + (1. 
- mask) * img\n\n            if i % log_every_t == 0 or i == timesteps - 1:\n                intermediates.append(img)\n            if callback: callback(i)\n            if img_callback: img_callback(img, i)\n\n        if return_intermediates:\n            return img, intermediates\n        return img\n\n    @torch.no_grad()\n    def sample(self, cond, batch_size=16, return_intermediates=False, x_T=None, \\\n               verbose=True, timesteps=None, mask=None, x0=None, shape=None, **kwargs):\n        if shape is None:\n            shape = (batch_size, self.channels, self.temporal_length, *self.image_size)\n        if cond is not None:\n            if isinstance(cond, dict):\n                cond = {key: cond[key][:batch_size] if not isinstance(cond[key], list) else\n                list(map(lambda x: x[:batch_size], cond[key])) for key in cond}\n            else:\n                cond = [c[:batch_size] for c in cond] if isinstance(cond, list) else cond[:batch_size]\n        return self.p_sample_loop(cond,\n                                  shape,\n                                  return_intermediates=return_intermediates, x_T=x_T,\n                                  verbose=verbose, timesteps=timesteps,\n                                  mask=mask, x0=x0, **kwargs)\n\n    @torch.no_grad()\n    def sample_log(self, cond, batch_size, ddim, ddim_steps, **kwargs):\n        if ddim:\n            ddim_sampler = DDIMSampler(self)\n            shape = (self.channels, self.temporal_length, *self.image_size)\n            samples, intermediates = ddim_sampler.sample(ddim_steps, batch_size, shape, cond, verbose=False, **kwargs)\n\n        else:\n            samples, intermediates = self.sample(cond=cond, batch_size=batch_size, return_intermediates=True, **kwargs)\n\n        return samples, intermediates\n\n    def configure_schedulers(self, optimizer):\n        assert 'target' in self.scheduler_config\n        scheduler_name = self.scheduler_config.target.split('.')[-1]\n        interval = self.scheduler_config.interval\n        frequency = self.scheduler_config.frequency\n        if scheduler_name == \"LambdaLRScheduler\":\n            scheduler = instantiate_from_config(self.scheduler_config)\n            scheduler.start_step = self.global_step\n            lr_scheduler = {\n                            'scheduler': LambdaLR(optimizer, lr_lambda=scheduler.schedule),\n                            'interval': interval,\n                            'frequency': frequency\n            }\n        elif scheduler_name == \"CosineAnnealingLRScheduler\":\n            scheduler = instantiate_from_config(self.scheduler_config)\n            decay_steps = scheduler.decay_steps\n            last_step = -1 if self.global_step == 0 else scheduler.start_step\n            lr_scheduler = {\n                            'scheduler': CosineAnnealingLR(optimizer, T_max=decay_steps, last_epoch=last_step),\n                            'interval': interval,\n                            'frequency': frequency\n            }\n        else:\n            raise NotImplementedError\n        return lr_scheduler\n\nclass LatentVisualDiffusion(LatentDiffusion):\n    def __init__(self, img_cond_stage_config, image_proj_stage_config, freeze_embedder=True, image_proj_model_trainable=True, *args, **kwargs):\n        super().__init__(*args, **kwargs)\n        self.image_proj_model_trainable = image_proj_model_trainable\n        self._init_embedder(img_cond_stage_config, freeze_embedder)\n        
self._init_img_ctx_projector(image_proj_stage_config, image_proj_model_trainable)\n\n    def _init_img_ctx_projector(self, config, trainable):\n        self.image_proj_model = instantiate_from_config(config)\n        if not trainable:\n            self.image_proj_model.eval()\n            self.image_proj_model.train = disabled_train\n            for param in self.image_proj_model.parameters():\n                param.requires_grad = False
\n\n    def _init_embedder(self, config, freeze=True):\n        self.embedder = instantiate_from_config(config)\n        if freeze:\n            self.embedder.eval()\n            self.embedder.train = disabled_train\n            for param in self.embedder.parameters():\n                param.requires_grad = False\n\n    def shared_step(self, batch, random_uncond, **kwargs):\n        x, c, fs = self.get_batch_input(batch, random_uncond=random_uncond, return_fs=True)\n        kwargs.update({\"fs\": fs.long()})\n        loss, loss_dict = self(x, c, **kwargs)\n        return loss, loss_dict
\n    \n    def get_batch_input(self, batch, random_uncond, return_first_stage_outputs=False, return_original_cond=False, return_fs=False, return_cond_frame=False, return_original_input=False, **kwargs):\n        ## x: b c t h w\n        x = super().get_input(batch, self.first_stage_key)\n        ## encode video frames x to z via a 2D encoder\n        z = self.encode_first_stage(x)\n        \n        ## get caption condition\n        cond_input = batch[self.cond_stage_key]\n\n        if isinstance(cond_input, dict) or isinstance(cond_input, list):\n            cond_emb = self.get_learned_conditioning(cond_input)\n        else:\n            cond_emb = self.get_learned_conditioning(cond_input.to(self.device))
\n                \n        cond = {}\n        ## to support classifier-free guidance, randomly drop out only text conditioning 5%, only image conditioning 5%, and both 5%.\n        if random_uncond:\n            random_num = torch.rand(x.size(0), device=x.device)\n        else:\n            random_num = torch.ones(x.size(0), device=x.device)  ## by doing so, we can get the text embedding and the complete image embedding for inference\n        prompt_mask = rearrange(random_num < 2 * self.uncond_prob, \"n -> n 1 1\")\n        input_mask = 1 - rearrange((random_num >= self.uncond_prob).float() * (random_num < 3 * self.uncond_prob).float(), \"n -> n 1 1 1\")
\n\n        null_prompt = self.get_learned_conditioning([\"\"])\n        prompt_imb = torch.where(prompt_mask, null_prompt, cond_emb.detach())\n\n        ## get conditioning frame\n        cond_frame_index = 0\n        if self.rand_cond_frame:\n            cond_frame_index = random.randint(0, self.model.diffusion_model.temporal_length-1)\n\n        img = x[:,:,cond_frame_index,...]\n        img = input_mask * img\n        ## img: b c h w\n        img_emb = self.embedder(img) ## b l c\n        img_emb = self.image_proj_model(img_emb)
\n\n        if self.model.conditioning_key == 'hybrid':\n            ## simply repeat the cond_frame to match the seq_len of z\n            img_cat_cond = z[:,:,cond_frame_index,:,:]\n            img_cat_cond = img_cat_cond.unsqueeze(2)\n            img_cat_cond = repeat(img_cat_cond, 'b c t h w -> b c (repeat t) h w', repeat=z.shape[2])\n\n            cond[\"c_concat\"] = [img_cat_cond] # b c t h w\n        cond[\"c_crossattn\"] = [torch.cat([prompt_imb, img_emb], dim=1)] ## concat in the seq_len dim\n\n        out = [z, cond]\n        if return_first_stage_outputs:\n            xrec = self.decode_first_stage(z)\n            out.extend([xrec])
\n\n        if return_original_cond:\n            out.append(cond_input)\n        if return_fs:\n            if self.fps_condition_type == 'fs':\n                fs = super().get_input(batch, 'frame_stride')\n            elif self.fps_condition_type == 'fps':\n                fs = super().get_input(batch, 'fps')\n            out.append(fs)\n        if return_cond_frame:\n            out.append(x[:,:,cond_frame_index,...].unsqueeze(2))\n        if return_original_input:\n            out.append(x)\n\n        return out
\n\n    @torch.no_grad()\n    def log_images(self, batch, sample=True, ddim_steps=50, ddim_eta=1., plot_denoise_rows=False, \\\n                    unconditional_guidance_scale=1.0, mask=None, **kwargs):\n        \"\"\" log images for LatentVisualDiffusion \"\"\"\n        ##### sampled_img_num: controls how many sampled images are logged; a larger value may cause OOM\n        sampled_img_num = 1\n        for key in batch.keys():\n            batch[key] = batch[key][:sampled_img_num]\n\n        ## TBD: currently, classifier_free_guidance sampling is only supported by DDIM\n        use_ddim = ddim_steps is not None\n        log = dict()
\n\n        z, c, xrec, xc, fs, cond_x = self.get_batch_input(batch, random_uncond=False,\n                                                return_first_stage_outputs=True,\n                                                return_original_cond=True,\n                                                return_fs=True,\n                                                return_cond_frame=True)\n\n        N = xrec.shape[0]\n        log[\"image_condition\"] = cond_x\n        log[\"reconst\"] = xrec\n        xc_with_fs = []\n        for idx, content in enumerate(xc):\n            xc_with_fs.append(content + '_fs=' + str(fs[idx].item()))\n        log[\"condition\"] = xc_with_fs\n        kwargs.update({\"fs\": fs.long()})\n\n        c_cat = None
\n        if sample:\n            # get uncond embedding for classifier-free guidance sampling\n            if unconditional_guidance_scale != 1.0:\n                if isinstance(c, dict):\n                    c_emb = c[\"c_crossattn\"][0]\n                    if 'c_concat' in c.keys():\n                        c_cat = c[\"c_concat\"][0]\n                else:\n                    c_emb = c\n\n                if self.uncond_type == \"empty_seq\":\n                    prompts = N * [\"\"]\n                    uc_prompt = self.get_learned_conditioning(prompts)\n                elif self.uncond_type == \"zero_embed\":\n                    uc_prompt = torch.zeros_like(c_emb)
\n                \n                img = torch.zeros_like(xrec[:,:,0]) ## b c h w\n                ## img: b c h w\n                img_emb = self.embedder(img) ## b l c\n                uc_img = self.image_proj_model(img_emb)\n\n                uc = torch.cat([uc_prompt, uc_img], dim=1)\n                ## hybrid case\n                if isinstance(c, dict):\n                    uc_hybrid = {\"c_concat\": [c_cat], \"c_crossattn\": [uc]}\n                    uc = uc_hybrid\n            else:\n                uc = None
\n\n            with self.ema_scope(\"Plotting\"):\n                samples, z_denoise_row = self.sample_log(cond=c, batch_size=N, ddim=use_ddim,\n                                                         ddim_steps=ddim_steps,eta=ddim_eta,\n                                                         unconditional_guidance_scale=unconditional_guidance_scale,\n                                             
            unconditional_conditioning=uc, x0=z, **kwargs)\n            x_samples = self.decode_first_stage(samples)\n            log[\"samples\"] = x_samples\n            \n            if plot_denoise_rows:\n                denoise_grid = self._get_denoise_row_from_list(z_denoise_row)\n                log[\"denoise_row\"] = denoise_grid\n\n        return log
\n\n    def configure_optimizers(self):\n        \"\"\" configure_optimizers for LatentVisualDiffusion \"\"\"\n        lr = self.learning_rate\n\n        params = list(self.model.parameters())\n        mainlogger.info(f\"@Training [{len(params)}] Full Parameters.\")\n\n        if self.cond_stage_trainable:\n            params_cond_stage = [p for p in self.cond_stage_model.parameters() if p.requires_grad == True]\n            mainlogger.info(f\"@Training [{len(params_cond_stage)}] Parameters for Cond_stage_model.\")\n            params.extend(params_cond_stage)
\n        \n        if self.image_proj_model_trainable:\n            mainlogger.info(f\"@Training [{len(list(self.image_proj_model.parameters()))}] Parameters for Image_proj_model.\")\n            params.extend(list(self.image_proj_model.parameters()))\n\n        if self.learn_logvar:\n            mainlogger.info('Diffusion model optimizing logvar')\n            if isinstance(params[0], dict):\n                params.append({\"params\": [self.logvar]})\n            else:\n                params.append(self.logvar)
\n\n        ## optimizer\n        optimizer = torch.optim.AdamW(params, lr=lr)\n\n        ## lr scheduler\n        if self.use_scheduler:\n            mainlogger.info(\"Setting up scheduler...\")\n            lr_scheduler = self.configure_schedulers(optimizer)\n            return [optimizer], [lr_scheduler]\n        \n        return optimizer
\n\n\nclass DiffusionWrapper(pl.LightningModule):\n    def __init__(self, diff_model_config, conditioning_key):\n        super().__init__()\n        self.diffusion_model = instantiate_from_config(diff_model_config)\n        self.conditioning_key = conditioning_key\n\n    def forward(self, x, t, c_concat: list = None, c_crossattn: list = None,\n                c_adm=None, s=None, mask=None, **kwargs):\n        if self.conditioning_key is None:\n            out = self.diffusion_model(x, t)\n        elif self.conditioning_key == 'concat':\n            xc = torch.cat([x] + c_concat, dim=1)\n            out = self.diffusion_model(xc, t, **kwargs)\n        elif self.conditioning_key == 'crossattn':\n            cc = torch.cat(c_crossattn, 1)\n            out = self.diffusion_model(x, t, context=cc, **kwargs)
\n        elif self.conditioning_key == 'hybrid':\n            ## x is [b,c,t,h,w]: concatenate the condition in the channel dim\n            xc = torch.cat([x] + c_concat, dim=1)\n            cc = torch.cat(c_crossattn, 1)\n            out = self.diffusion_model(xc, t, context=cc, **kwargs)\n        elif self.conditioning_key == 'resblockcond':\n            cc = c_crossattn[0]\n            out = self.diffusion_model(x, t, context=cc)\n        elif self.conditioning_key == 'adm':\n            cc = c_crossattn[0]\n            out = self.diffusion_model(x, t, y=cc)
\n        elif self.conditioning_key == 'hybrid-adm':\n            assert c_adm is not None\n            xc = torch.cat([x] + c_concat, dim=1)\n            cc = torch.cat(c_crossattn, 1)\n            out = self.diffusion_model(xc, t, context=cc, y=c_adm, **kwargs)\n        elif self.conditioning_key == 'hybrid-time':\n            assert s is not None\n            xc = torch.cat([x] + c_concat, dim=1)\n            cc = torch.cat(c_crossattn, 1)\n            out = self.diffusion_model(xc, t, context=cc, s=s)
\n        elif self.conditioning_key == 'concat-time-mask':\n            # assert s is not None\n            xc = torch.cat([x] + c_concat, dim=1)\n            out = self.diffusion_model(xc, t, context=None, s=s, mask=mask)\n        elif self.conditioning_key == 'concat-adm-mask':\n            # assert s is not None\n            if c_concat is not None:\n                xc = torch.cat([x] + c_concat, dim=1)\n            else:\n                xc = x\n            out = self.diffusion_model(xc, t, context=None, y=s, mask=mask)
\n        elif self.conditioning_key == 'hybrid-adm-mask':\n            cc = torch.cat(c_crossattn, 1)\n            if c_concat is not None:\n                xc = torch.cat([x] + c_concat, dim=1)\n            else:\n                xc = x\n            out = self.diffusion_model(xc, t, context=cc, y=s, mask=mask)\n        elif self.conditioning_key == 'hybrid-time-adm': # adm means y, e.g., class index\n            # assert s is not None\n            assert c_adm is not None\n            xc = torch.cat([x] + c_concat, dim=1)\n            cc = torch.cat(c_crossattn, 1)\n            out = self.diffusion_model(xc, t, context=cc, s=s, y=c_adm)
\n        elif self.conditioning_key == 'crossattn-adm':\n            assert c_adm is not None\n            cc = torch.cat(c_crossattn, 1)\n            out = self.diffusion_model(x, t, context=cc, y=c_adm)\n        else:\n            raise NotImplementedError()\n\n        return out"
  },
  {
    "path": "ToonCrafter/lvdm/models/samplers/ddim.py",
    "content": "import numpy as np\nfrom tqdm import tqdm\nimport torch\nfrom lvdm.models.utils_diffusion import make_ddim_sampling_parameters, make_ddim_timesteps, rescale_noise_cfg\nfrom lvdm.common import noise_like\nfrom lvdm.common import extract_into_tensor\nimport copy\n\n\nclass DDIMSampler(object):\n    def __init__(self, model, schedule=\"linear\", **kwargs):\n        super().__init__()\n        self.model = model\n        self.ddpm_num_timesteps = model.num_timesteps\n        self.schedule = schedule\n        self.counter = 0\n\n    def register_buffer(self, name, attr):\n        if isinstance(attr, torch.Tensor):\n            if attr.device != torch.device(\"cuda\") and torch.cuda.is_available():\n                attr = attr.to(torch.device(\"cuda\"))\n        setattr(self, name, attr)\n\n    def make_schedule(self, ddim_num_steps, ddim_discretize=\"uniform\", ddim_eta=0., verbose=True):\n        self.ddim_timesteps = make_ddim_timesteps(ddim_discr_method=ddim_discretize, num_ddim_timesteps=ddim_num_steps,\n                                                  num_ddpm_timesteps=self.ddpm_num_timesteps, verbose=verbose)\n        alphas_cumprod = self.model.alphas_cumprod\n        assert alphas_cumprod.shape[0] == self.ddpm_num_timesteps, 'alphas have to be defined for each timestep'\n        def to_torch(x): return x.clone().detach().to(torch.float32).to(self.model.device)\n\n        if self.model.use_dynamic_rescale:\n            self.ddim_scale_arr = self.model.scale_arr[self.ddim_timesteps]\n            self.ddim_scale_arr_prev = torch.cat([self.ddim_scale_arr[0:1], self.ddim_scale_arr[:-1]])\n\n        self.register_buffer('betas', to_torch(self.model.betas))\n        self.register_buffer('alphas_cumprod', to_torch(alphas_cumprod))\n        self.register_buffer('alphas_cumprod_prev', to_torch(self.model.alphas_cumprod_prev))\n\n        # calculations for diffusion q(x_t | x_{t-1}) and others\n        self.register_buffer('sqrt_alphas_cumprod', to_torch(np.sqrt(alphas_cumprod.cpu())))\n        self.register_buffer('sqrt_one_minus_alphas_cumprod', to_torch(np.sqrt(1. - alphas_cumprod.cpu())))\n        self.register_buffer('log_one_minus_alphas_cumprod', to_torch(np.log(1. - alphas_cumprod.cpu())))\n        self.register_buffer('sqrt_recip_alphas_cumprod', to_torch(np.sqrt(1. / alphas_cumprod.cpu())))\n        self.register_buffer('sqrt_recipm1_alphas_cumprod', to_torch(np.sqrt(1. / alphas_cumprod.cpu() - 1)))\n\n        # ddim sampling parameters\n        ddim_sigmas, ddim_alphas, ddim_alphas_prev = make_ddim_sampling_parameters(alphacums=alphas_cumprod.cpu(),\n                                                                                   ddim_timesteps=self.ddim_timesteps,\n                                                                                   eta=ddim_eta, verbose=verbose)\n        self.register_buffer('ddim_sigmas', ddim_sigmas)\n        self.register_buffer('ddim_alphas', ddim_alphas)\n        self.register_buffer('ddim_alphas_prev', ddim_alphas_prev)\n        self.register_buffer('ddim_sqrt_one_minus_alphas', np.sqrt(1. 
- ddim_alphas))\n        sigmas_for_original_sampling_steps = ddim_eta * torch.sqrt(\n            (1 - self.alphas_cumprod_prev) / (1 - self.alphas_cumprod) * (\n                1 - self.alphas_cumprod / self.alphas_cumprod_prev))\n        self.register_buffer('ddim_sigmas_for_original_num_steps', sigmas_for_original_sampling_steps)\n\n    @torch.no_grad()\n    def sample(self,\n               S,\n               batch_size,\n               shape,\n               conditioning=None,\n               callback=None,\n               normals_sequence=None,\n               img_callback=None,\n               quantize_x0=False,\n               eta=0.,\n               mask=None,\n               x0=None,\n               temperature=1.,\n               noise_dropout=0.,\n               score_corrector=None,\n               corrector_kwargs=None,\n               verbose=True,\n               schedule_verbose=False,\n               x_T=None,\n               log_every_t=100,\n               unconditional_guidance_scale=1.,\n               unconditional_conditioning=None,\n               precision=None,\n               fs=None,\n               timestep_spacing='uniform',  # uniform_trailing for starting from last timestep\n               guidance_rescale=0.0,\n               **kwargs\n               ):\n\n        # check condition bs\n        if conditioning is not None:\n            if isinstance(conditioning, dict):\n                try:\n                    cbs = conditioning[list(conditioning.keys())[0]].shape[0]\n                except BaseException:\n                    cbs = conditioning[list(conditioning.keys())[0]][0].shape[0]\n\n                if cbs != batch_size:\n                    print(f\"Warning: Got {cbs} conditionings but batch-size is {batch_size}\")\n            else:\n                if conditioning.shape[0] != batch_size:\n                    print(f\"Warning: Got {conditioning.shape[0]} conditionings but batch-size is {batch_size}\")\n\n        self.make_schedule(ddim_num_steps=S, ddim_discretize=timestep_spacing, ddim_eta=eta, verbose=schedule_verbose)\n\n        # make shape\n        if len(shape) == 3:\n            C, H, W = shape\n            size = (batch_size, C, H, W)\n        elif len(shape) == 4:\n            C, T, H, W = shape\n            size = (batch_size, C, T, H, W)\n\n        samples, intermediates = self.ddim_sampling(conditioning, size,\n                                                    callback=callback,\n                                                    img_callback=img_callback,\n                                                    quantize_denoised=quantize_x0,\n                                                    mask=mask, x0=x0,\n                                                    ddim_use_original_steps=False,\n                                                    noise_dropout=noise_dropout,\n                                                    temperature=temperature,\n                                                    score_corrector=score_corrector,\n                                                    corrector_kwargs=corrector_kwargs,\n                                                    x_T=x_T,\n                                                    log_every_t=log_every_t,\n                                                    unconditional_guidance_scale=unconditional_guidance_scale,\n                                                    unconditional_conditioning=unconditional_conditioning,\n                                                    
verbose=verbose,\n                                                    precision=precision,\n                                                    fs=fs,\n                                                    guidance_rescale=guidance_rescale,\n                                                    **kwargs)\n        return samples, intermediates\n\n    @torch.no_grad()\n    def ddim_sampling(self, cond, shape,\n                      x_T=None, ddim_use_original_steps=False,\n                      callback=None, timesteps=None, quantize_denoised=False,\n                      mask=None, x0=None, img_callback=None, log_every_t=100,\n                      temperature=1., noise_dropout=0., score_corrector=None, corrector_kwargs=None,\n                      unconditional_guidance_scale=1., unconditional_conditioning=None, verbose=True, precision=None, fs=None, guidance_rescale=0.0,\n                      **kwargs):\n        device = self.model.betas.device\n        b = shape[0]\n        if x_T is None:\n            img = torch.randn(shape, device=device)\n        else:\n            img = x_T\n        if precision is not None:\n            if precision == 16:\n                img = img.to(dtype=torch.float16)\n\n        if timesteps is None:\n            timesteps = self.ddpm_num_timesteps if ddim_use_original_steps else self.ddim_timesteps\n        elif timesteps is not None and not ddim_use_original_steps:\n            subset_end = int(min(timesteps / self.ddim_timesteps.shape[0], 1) * self.ddim_timesteps.shape[0]) - 1\n            timesteps = self.ddim_timesteps[:subset_end]\n\n        intermediates = {'x_inter': [img], 'pred_x0': [img]}\n        time_range = reversed(range(0, timesteps)) if ddim_use_original_steps else np.flip(timesteps)\n        total_steps = timesteps if ddim_use_original_steps else timesteps.shape[0]\n        if verbose:\n            iterator = tqdm(time_range, desc='DDIM Sampler', total=total_steps)\n        else:\n            iterator = time_range\n\n        clean_cond = kwargs.pop(\"clean_cond\", False)\n\n        # cond_copy, unconditional_conditioning_copy = copy.deepcopy(cond), copy.deepcopy(unconditional_conditioning)\n        for i, step in enumerate(iterator):\n            index = total_steps - i - 1\n            ts = torch.full((b,), step, device=device, dtype=torch.long)\n\n            # use mask to blend noised original latent (img_orig) & new sampled latent (img)\n            if mask is not None:\n                assert x0 is not None\n                if clean_cond:\n                    img_orig = x0\n                else:\n                    img_orig = self.model.q_sample(x0, ts)  # TODO: deterministic forward pass? <ddim inversion>\n                img = img_orig * mask + (1. 
- mask) * img  # keep original & modify use img\n\n            outs = self.p_sample_ddim(img, cond, ts, index=index, use_original_steps=ddim_use_original_steps,\n                                      quantize_denoised=quantize_denoised, temperature=temperature,\n                                      noise_dropout=noise_dropout, score_corrector=score_corrector,\n                                      corrector_kwargs=corrector_kwargs,\n                                      unconditional_guidance_scale=unconditional_guidance_scale,\n                                      unconditional_conditioning=unconditional_conditioning,\n                                      mask=mask, x0=x0, fs=fs, guidance_rescale=guidance_rescale,\n                                      **kwargs)\n\n            img, pred_x0 = outs\n            if precision == 16:\n                img = img.to(dtype=torch.float16)\n            if callback:\n                callback(i)\n            if img_callback:\n                img_callback(pred_x0, i)\n\n            if index % log_every_t == 0 or index == total_steps - 1:\n                intermediates['x_inter'].append(img)\n                intermediates['pred_x0'].append(pred_x0)\n\n        return img, intermediates\n\n    @torch.no_grad()\n    def p_sample_ddim(self, x, c, t, index, repeat_noise=False, use_original_steps=False, quantize_denoised=False,\n                      temperature=1., noise_dropout=0., score_corrector=None, corrector_kwargs=None,\n                      unconditional_guidance_scale=1., unconditional_conditioning=None,\n                      uc_type=None, conditional_guidance_scale_temporal=None, mask=None, x0=None, guidance_rescale=0.0, **kwargs):\n        b, *_, device = *x.shape, x.device\n        if x.dim() == 5:\n            is_video = True\n        else:\n            is_video = False\n\n        if unconditional_conditioning is None or unconditional_guidance_scale == 1.:\n            model_output = self.model.apply_model(x, t, c, **kwargs)  # unet denoiser\n        else:\n            # do_classifier_free_guidance\n            if isinstance(c, torch.Tensor) or isinstance(c, dict):\n                e_t_cond = self.model.apply_model(x, t, c, **kwargs)\n                e_t_uncond = self.model.apply_model(x, t, unconditional_conditioning, **kwargs)\n            else:\n                raise NotImplementedError\n\n            model_output = e_t_uncond + unconditional_guidance_scale * (e_t_cond - e_t_uncond)\n\n            if guidance_rescale > 0.0:\n                model_output = rescale_noise_cfg(model_output, e_t_cond, guidance_rescale=guidance_rescale)\n\n        if self.model.parameterization == \"v\":\n            e_t = self.model.predict_eps_from_z_and_v(x, t, model_output)\n        else:\n            e_t = model_output\n\n        if score_corrector is not None:\n            assert self.model.parameterization == \"eps\", 'not implemented'\n            e_t = score_corrector.modify_score(self.model, e_t, x, t, c, **corrector_kwargs)\n\n        alphas = self.model.alphas_cumprod if use_original_steps else self.ddim_alphas\n        alphas_prev = self.model.alphas_cumprod_prev if use_original_steps else self.ddim_alphas_prev\n        sqrt_one_minus_alphas = self.model.sqrt_one_minus_alphas_cumprod if use_original_steps else self.ddim_sqrt_one_minus_alphas\n        # sigmas = self.model.ddim_sigmas_for_original_num_steps if use_original_steps else self.ddim_sigmas\n        sigmas = self.ddim_sigmas_for_original_num_steps if use_original_steps else 
self.ddim_sigmas\n        # select parameters corresponding to the currently considered timestep\n\n        if is_video:\n            size = (b, 1, 1, 1, 1)\n        else:\n            size = (b, 1, 1, 1)\n        a_t = torch.full(size, alphas[index], device=device)\n        a_prev = torch.full(size, alphas_prev[index], device=device)\n        sigma_t = torch.full(size, sigmas[index], device=device)\n        sqrt_one_minus_at = torch.full(size, sqrt_one_minus_alphas[index], device=device)\n\n        # current prediction for x_0\n        if self.model.parameterization != \"v\":\n            pred_x0 = (x - sqrt_one_minus_at * e_t) / a_t.sqrt()\n        else:\n            pred_x0 = self.model.predict_start_from_z_and_v(x, t, model_output)\n\n        if self.model.use_dynamic_rescale:\n            scale_t = torch.full(size, self.ddim_scale_arr[index], device=device)\n            prev_scale_t = torch.full(size, self.ddim_scale_arr_prev[index], device=device)\n            rescale = (prev_scale_t / scale_t)\n            pred_x0 *= rescale\n\n        if quantize_denoised:\n            pred_x0, _, *_ = self.model.first_stage_model.quantize(pred_x0)\n        # direction pointing to x_t\n        dir_xt = (1. - a_prev - sigma_t**2).sqrt() * e_t\n\n        noise = sigma_t * noise_like(x.shape, device, repeat_noise) * temperature\n        if noise_dropout > 0.:\n            noise = torch.nn.functional.dropout(noise, p=noise_dropout)\n\n        x_prev = a_prev.sqrt() * pred_x0 + dir_xt + noise\n\n        return x_prev, pred_x0\n\n    @torch.no_grad()\n    def decode(self, x_latent, cond, t_start, unconditional_guidance_scale=1.0, unconditional_conditioning=None,\n               use_original_steps=False, callback=None):\n\n        timesteps = np.arange(self.ddpm_num_timesteps) if use_original_steps else self.ddim_timesteps\n        timesteps = timesteps[:t_start]\n\n        time_range = np.flip(timesteps)\n        total_steps = timesteps.shape[0]\n        print(f\"Running DDIM Sampling with {total_steps} timesteps\")\n\n        iterator = tqdm(time_range, desc='Decoding image', total=total_steps)\n        x_dec = x_latent\n        for i, step in enumerate(iterator):\n            index = total_steps - i - 1\n            ts = torch.full((x_latent.shape[0],), step, device=x_latent.device, dtype=torch.long)\n            x_dec, _ = self.p_sample_ddim(x_dec, cond, ts, index=index, use_original_steps=use_original_steps,\n                                          unconditional_guidance_scale=unconditional_guidance_scale,\n                                          unconditional_conditioning=unconditional_conditioning)\n            if callback:\n                callback(i)\n        return x_dec\n\n    @torch.no_grad()\n    def stochastic_encode(self, x0, t, use_original_steps=False, noise=None):\n        # fast, but does not allow for exact reconstruction\n        # t serves as an index to gather the correct alphas\n        if use_original_steps:\n            sqrt_alphas_cumprod = self.sqrt_alphas_cumprod\n            sqrt_one_minus_alphas_cumprod = self.sqrt_one_minus_alphas_cumprod\n        else:\n            sqrt_alphas_cumprod = torch.sqrt(self.ddim_alphas)\n            sqrt_one_minus_alphas_cumprod = self.ddim_sqrt_one_minus_alphas\n\n        if noise is None:\n            noise = torch.randn_like(x0)\n        return (extract_into_tensor(sqrt_alphas_cumprod, t, x0.shape) * x0 +\n                extract_into_tensor(sqrt_one_minus_alphas_cumprod, t, x0.shape) * noise)\n"
  },
  {
    "path": "ToonCrafter/lvdm/models/samplers/ddim_multiplecond.py",
    "content": "import numpy as np\nfrom tqdm import tqdm\nimport torch\nfrom lvdm.models.utils_diffusion import make_ddim_sampling_parameters, make_ddim_timesteps, rescale_noise_cfg\nfrom lvdm.common import noise_like\nfrom lvdm.common import extract_into_tensor\nimport copy\n\n\nclass DDIMSampler(object):\n    def __init__(self, model, schedule=\"linear\", **kwargs):\n        super().__init__()\n        self.model = model\n        self.ddpm_num_timesteps = model.num_timesteps\n        self.schedule = schedule\n        self.counter = 0
\n\n    def register_buffer(self, name, attr):\n        if isinstance(attr, torch.Tensor):\n            if attr.device != torch.device(\"cuda\") and torch.cuda.is_available():\n                attr = attr.to(torch.device(\"cuda\"))\n        setattr(self, name, attr)
\n\n    def make_schedule(self, ddim_num_steps, ddim_discretize=\"uniform\", ddim_eta=0., verbose=True):\n        self.ddim_timesteps = make_ddim_timesteps(ddim_discr_method=ddim_discretize, num_ddim_timesteps=ddim_num_steps,\n                                                  num_ddpm_timesteps=self.ddpm_num_timesteps,verbose=verbose)\n        alphas_cumprod = self.model.alphas_cumprod\n        assert alphas_cumprod.shape[0] == self.ddpm_num_timesteps, 'alphas have to be defined for each timestep'\n        to_torch = lambda x: x.clone().detach().to(torch.float32).to(self.model.device)
\n\n        if self.model.use_dynamic_rescale:\n            self.ddim_scale_arr = self.model.scale_arr[self.ddim_timesteps]\n            self.ddim_scale_arr_prev = torch.cat([self.ddim_scale_arr[0:1], self.ddim_scale_arr[:-1]])\n\n        self.register_buffer('betas', to_torch(self.model.betas))\n        self.register_buffer('alphas_cumprod', to_torch(alphas_cumprod))\n        self.register_buffer('alphas_cumprod_prev', to_torch(self.model.alphas_cumprod_prev))
\n\n        # calculations for diffusion q(x_t | x_{t-1}) and others\n        self.register_buffer('sqrt_alphas_cumprod', to_torch(np.sqrt(alphas_cumprod.cpu())))\n        self.register_buffer('sqrt_one_minus_alphas_cumprod', to_torch(np.sqrt(1. - alphas_cumprod.cpu())))\n        self.register_buffer('log_one_minus_alphas_cumprod', to_torch(np.log(1. - alphas_cumprod.cpu())))\n        self.register_buffer('sqrt_recip_alphas_cumprod', to_torch(np.sqrt(1. / alphas_cumprod.cpu())))\n        self.register_buffer('sqrt_recipm1_alphas_cumprod', to_torch(np.sqrt(1. / alphas_cumprod.cpu() - 1)))
\n\n        # ddim sampling parameters\n        ddim_sigmas, ddim_alphas, ddim_alphas_prev = make_ddim_sampling_parameters(alphacums=alphas_cumprod.cpu(),\n                                                                                   ddim_timesteps=self.ddim_timesteps,\n                                                                                   eta=ddim_eta,verbose=verbose)\n        self.register_buffer('ddim_sigmas', ddim_sigmas)\n        self.register_buffer('ddim_alphas', ddim_alphas)\n        self.register_buffer('ddim_alphas_prev', ddim_alphas_prev)\n        self.register_buffer('ddim_sqrt_one_minus_alphas', np.sqrt(1. 
- ddim_alphas))\n        sigmas_for_original_sampling_steps = ddim_eta * torch.sqrt(\n            (1 - self.alphas_cumprod_prev) / (1 - self.alphas_cumprod) * (\n                        1 - self.alphas_cumprod / self.alphas_cumprod_prev))\n        self.register_buffer('ddim_sigmas_for_original_num_steps', sigmas_for_original_sampling_steps)\n\n    @torch.no_grad()\n    def sample(self,\n               S,\n               batch_size,\n               shape,\n               conditioning=None,\n               callback=None,\n               normals_sequence=None,\n               img_callback=None,\n               quantize_x0=False,\n               eta=0.,\n               mask=None,\n               x0=None,\n               temperature=1.,\n               noise_dropout=0.,\n               score_corrector=None,\n               corrector_kwargs=None,\n               verbose=True,\n               schedule_verbose=False,\n               x_T=None,\n               log_every_t=100,\n               unconditional_guidance_scale=1.,\n               unconditional_conditioning=None,\n               precision=None,\n               fs=None,\n               timestep_spacing='uniform', #uniform_trailing for starting from last timestep\n               guidance_rescale=0.0,\n               # this has to come in the same format as the conditioning, # e.g. as encoded tokens, ...\n               **kwargs\n               ):\n        \n        # check condition bs\n        if conditioning is not None:\n            if isinstance(conditioning, dict):\n                try:\n                    cbs = conditioning[list(conditioning.keys())[0]].shape[0]\n                except:\n                    cbs = conditioning[list(conditioning.keys())[0]][0].shape[0]\n\n                if cbs != batch_size:\n                    print(f\"Warning: Got {cbs} conditionings but batch-size is {batch_size}\")\n            else:\n                if conditioning.shape[0] != batch_size:\n                    print(f\"Warning: Got {conditioning.shape[0]} conditionings but batch-size is {batch_size}\")\n\n        # print('==> timestep_spacing: ', timestep_spacing, guidance_rescale)\n        self.make_schedule(ddim_num_steps=S, ddim_discretize=timestep_spacing, ddim_eta=eta, verbose=schedule_verbose)\n        \n        # make shape\n        if len(shape) == 3:\n            C, H, W = shape\n            size = (batch_size, C, H, W)\n        elif len(shape) == 4:\n            C, T, H, W = shape\n            size = (batch_size, C, T, H, W)\n        # print(f'Data shape for DDIM sampling is {size}, eta {eta}')\n        \n        samples, intermediates = self.ddim_sampling(conditioning, size,\n                                                    callback=callback,\n                                                    img_callback=img_callback,\n                                                    quantize_denoised=quantize_x0,\n                                                    mask=mask, x0=x0,\n                                                    ddim_use_original_steps=False,\n                                                    noise_dropout=noise_dropout,\n                                                    temperature=temperature,\n                                                    score_corrector=score_corrector,\n                                                    corrector_kwargs=corrector_kwargs,\n                                                    x_T=x_T,\n                                                    log_every_t=log_every_t,\n             
                                       unconditional_guidance_scale=unconditional_guidance_scale,\n                                                    unconditional_conditioning=unconditional_conditioning,\n                                                    verbose=verbose,\n                                                    precision=precision,\n                                                    fs=fs,\n                                                    guidance_rescale=guidance_rescale,\n                                                    **kwargs)\n        return samples, intermediates\n\n    @torch.no_grad()\n    def ddim_sampling(self, cond, shape,\n                      x_T=None, ddim_use_original_steps=False,\n                      callback=None, timesteps=None, quantize_denoised=False,\n                      mask=None, x0=None, img_callback=None, log_every_t=100,\n                      temperature=1., noise_dropout=0., score_corrector=None, corrector_kwargs=None,\n                      unconditional_guidance_scale=1., unconditional_conditioning=None, verbose=True,precision=None,fs=None,guidance_rescale=0.0,\n                      **kwargs):\n        device = self.model.betas.device        \n        b = shape[0]\n        if x_T is None:\n            img = torch.randn(shape, device=device)\n        else:\n            img = x_T\n        if precision is not None:\n            if precision == 16:\n                img = img.to(dtype=torch.float16)\n\n        \n        if timesteps is None:\n            timesteps = self.ddpm_num_timesteps if ddim_use_original_steps else self.ddim_timesteps\n        elif timesteps is not None and not ddim_use_original_steps:\n            subset_end = int(min(timesteps / self.ddim_timesteps.shape[0], 1) * self.ddim_timesteps.shape[0]) - 1\n            timesteps = self.ddim_timesteps[:subset_end]\n            \n        intermediates = {'x_inter': [img], 'pred_x0': [img]}\n        time_range = reversed(range(0,timesteps)) if ddim_use_original_steps else np.flip(timesteps)\n        total_steps = timesteps if ddim_use_original_steps else timesteps.shape[0]\n        if verbose:\n            iterator = tqdm(time_range, desc='DDIM Sampler', total=total_steps)\n        else:\n            iterator = time_range\n\n        clean_cond = kwargs.pop(\"clean_cond\", False)\n\n        # cond_copy, unconditional_conditioning_copy = copy.deepcopy(cond), copy.deepcopy(unconditional_conditioning)\n        for i, step in enumerate(iterator):\n            index = total_steps - i - 1\n            ts = torch.full((b,), step, device=device, dtype=torch.long)\n\n            ## use mask to blend noised original latent (img_orig) & new sampled latent (img)\n            if mask is not None:\n                assert x0 is not None\n                if clean_cond:\n                    img_orig = x0\n                else:\n                    img_orig = self.model.q_sample(x0, ts)  # TODO: deterministic forward pass? <ddim inversion>\n                img = img_orig * mask + (1. 
- mask) * img # keep original & modify use img\n\n\n\n\n            outs = self.p_sample_ddim(img, cond, ts, index=index, use_original_steps=ddim_use_original_steps,\n                                      quantize_denoised=quantize_denoised, temperature=temperature,\n                                      noise_dropout=noise_dropout, score_corrector=score_corrector,\n                                      corrector_kwargs=corrector_kwargs,\n                                      unconditional_guidance_scale=unconditional_guidance_scale,\n                                      unconditional_conditioning=unconditional_conditioning,\n                                      mask=mask,x0=x0,fs=fs,guidance_rescale=guidance_rescale,\n                                      **kwargs)\n            \n\n\n            img, pred_x0 = outs\n            if callback: callback(i)\n            if img_callback: img_callback(pred_x0, i)\n\n            if index % log_every_t == 0 or index == total_steps - 1:\n                intermediates['x_inter'].append(img)\n                intermediates['pred_x0'].append(pred_x0)\n\n        return img, intermediates\n\n    @torch.no_grad()\n    def p_sample_ddim(self, x, c, t, index, repeat_noise=False, use_original_steps=False, quantize_denoised=False,\n                      temperature=1., noise_dropout=0., score_corrector=None, corrector_kwargs=None,\n                      unconditional_guidance_scale=1., unconditional_conditioning=None,\n                      uc_type=None, cfg_img=None,mask=None,x0=None,guidance_rescale=0.0, **kwargs):\n        b, *_, device = *x.shape, x.device\n        if x.dim() == 5:\n            is_video = True\n        else:\n            is_video = False\n        if cfg_img is None:\n            cfg_img = unconditional_guidance_scale\n\n        unconditional_conditioning_img_nonetext = kwargs['unconditional_conditioning_img_nonetext']\n\n        \n        if unconditional_conditioning is None or unconditional_guidance_scale == 1.:\n            model_output = self.model.apply_model(x, t, c, **kwargs) # unet denoiser\n        else:\n            ### with unconditional condition\n            e_t_cond = self.model.apply_model(x, t, c, **kwargs)\n            e_t_uncond = self.model.apply_model(x, t, unconditional_conditioning, **kwargs)\n            e_t_uncond_img = self.model.apply_model(x, t, unconditional_conditioning_img_nonetext, **kwargs)\n            # text cfg\n            model_output = e_t_uncond + cfg_img * (e_t_uncond_img - e_t_uncond) + unconditional_guidance_scale * (e_t_cond - e_t_uncond_img)\n            if guidance_rescale > 0.0:\n                model_output = rescale_noise_cfg(model_output, e_t_cond, guidance_rescale=guidance_rescale)\n        \n        if self.model.parameterization == \"v\":\n            e_t = self.model.predict_eps_from_z_and_v(x, t, model_output)\n        else:\n            e_t = model_output\n\n        if score_corrector is not None:\n            assert self.model.parameterization == \"eps\", 'not implemented'\n            e_t = score_corrector.modify_score(self.model, e_t, x, t, c, **corrector_kwargs)\n\n        alphas = self.model.alphas_cumprod if use_original_steps else self.ddim_alphas\n        alphas_prev = self.model.alphas_cumprod_prev if use_original_steps else self.ddim_alphas_prev\n        sqrt_one_minus_alphas = self.model.sqrt_one_minus_alphas_cumprod if use_original_steps else self.ddim_sqrt_one_minus_alphas\n        sigmas = self.ddim_sigmas_for_original_num_steps if use_original_steps else 
self.ddim_sigmas\n        # select parameters corresponding to the currently considered timestep\n        \n        if is_video:\n            size = (b, 1, 1, 1, 1)\n        else:\n            size = (b, 1, 1, 1)\n        a_t = torch.full(size, alphas[index], device=device)\n        a_prev = torch.full(size, alphas_prev[index], device=device)\n        sigma_t = torch.full(size, sigmas[index], device=device)\n        sqrt_one_minus_at = torch.full(size, sqrt_one_minus_alphas[index],device=device)\n\n        # current prediction for x_0\n        if self.model.parameterization != \"v\":\n            pred_x0 = (x - sqrt_one_minus_at * e_t) / a_t.sqrt()\n        else:\n            pred_x0 = self.model.predict_start_from_z_and_v(x, t, model_output)\n        \n        if self.model.use_dynamic_rescale:\n            scale_t = torch.full(size, self.ddim_scale_arr[index], device=device)\n            prev_scale_t = torch.full(size, self.ddim_scale_arr_prev[index], device=device)\n            rescale = (prev_scale_t / scale_t)\n            pred_x0 *= rescale\n\n        if quantize_denoised:\n            pred_x0, _, *_ = self.model.first_stage_model.quantize(pred_x0)\n        # direction pointing to x_t\n        dir_xt = (1. - a_prev - sigma_t**2).sqrt() * e_t\n\n        noise = sigma_t * noise_like(x.shape, device, repeat_noise) * temperature\n        if noise_dropout > 0.:\n            noise = torch.nn.functional.dropout(noise, p=noise_dropout)\n    \n        x_prev = a_prev.sqrt() * pred_x0 + dir_xt + noise\n\n        return x_prev, pred_x0\n\n    @torch.no_grad()\n    def decode(self, x_latent, cond, t_start, unconditional_guidance_scale=1.0, unconditional_conditioning=None,\n               use_original_steps=False, callback=None):\n\n        timesteps = np.arange(self.ddpm_num_timesteps) if use_original_steps else self.ddim_timesteps\n        timesteps = timesteps[:t_start]\n\n        time_range = np.flip(timesteps)\n        total_steps = timesteps.shape[0]\n        print(f\"Running DDIM Sampling with {total_steps} timesteps\")\n\n        iterator = tqdm(time_range, desc='Decoding image', total=total_steps)\n        x_dec = x_latent\n        for i, step in enumerate(iterator):\n            index = total_steps - i - 1\n            ts = torch.full((x_latent.shape[0],), step, device=x_latent.device, dtype=torch.long)\n            x_dec, _ = self.p_sample_ddim(x_dec, cond, ts, index=index, use_original_steps=use_original_steps,\n                                          unconditional_guidance_scale=unconditional_guidance_scale,\n                                          unconditional_conditioning=unconditional_conditioning)\n            if callback: callback(i)\n        return x_dec\n\n    @torch.no_grad()\n    def stochastic_encode(self, x0, t, use_original_steps=False, noise=None):\n        # fast, but does not allow for exact reconstruction\n        # t serves as an index to gather the correct alphas\n        if use_original_steps:\n            sqrt_alphas_cumprod = self.sqrt_alphas_cumprod\n            sqrt_one_minus_alphas_cumprod = self.sqrt_one_minus_alphas_cumprod\n        else:\n            sqrt_alphas_cumprod = torch.sqrt(self.ddim_alphas)\n            sqrt_one_minus_alphas_cumprod = self.ddim_sqrt_one_minus_alphas\n\n        if noise is None:\n            noise = torch.randn_like(x0)\n        return (extract_into_tensor(sqrt_alphas_cumprod, t, x0.shape) * x0 +\n                extract_into_tensor(sqrt_one_minus_alphas_cumprod, t, x0.shape) * noise)"
  },
  {
    "path": "ToonCrafter/lvdm/models/utils_diffusion.py",
    "content": "import math\nimport numpy as np\nimport torch\nimport torch.nn.functional as F\nfrom einops import repeat\n\n\ndef timestep_embedding(timesteps, dim, max_period=10000, repeat_only=False):\n    \"\"\"\n    Create sinusoidal timestep embeddings.\n    :param timesteps: a 1-D Tensor of N indices, one per batch element.\n                      These may be fractional.\n    :param dim: the dimension of the output.\n    :param max_period: controls the minimum frequency of the embeddings.\n    :return: an [N x dim] Tensor of positional embeddings.\n    \"\"\"\n    if not repeat_only:\n        half = dim // 2\n        freqs = torch.exp(\n            -math.log(max_period) * torch.arange(start=0, end=half, dtype=torch.float32) / half\n        ).to(device=timesteps.device)\n        args = timesteps[:, None].float() * freqs[None]\n        embedding = torch.cat([torch.cos(args), torch.sin(args)], dim=-1)\n        if dim % 2:\n            embedding = torch.cat([embedding, torch.zeros_like(embedding[:, :1])], dim=-1)\n    else:\n        embedding = repeat(timesteps, 'b -> b d', d=dim)\n    return embedding\n\n\ndef make_beta_schedule(schedule, n_timestep, linear_start=1e-4, linear_end=2e-2, cosine_s=8e-3):\n    if schedule == \"linear\":\n        betas = (\n                torch.linspace(linear_start ** 0.5, linear_end ** 0.5, n_timestep, dtype=torch.float64) ** 2\n        )\n\n    elif schedule == \"cosine\":\n        timesteps = (\n                torch.arange(n_timestep + 1, dtype=torch.float64) / n_timestep + cosine_s\n        )\n        alphas = timesteps / (1 + cosine_s) * np.pi / 2\n        alphas = torch.cos(alphas).pow(2)\n        alphas = alphas / alphas[0]\n        betas = 1 - alphas[1:] / alphas[:-1]\n        betas = np.clip(betas, a_min=0, a_max=0.999)\n\n    elif schedule == \"sqrt_linear\":\n        betas = torch.linspace(linear_start, linear_end, n_timestep, dtype=torch.float64)\n    elif schedule == \"sqrt\":\n        betas = torch.linspace(linear_start, linear_end, n_timestep, dtype=torch.float64) ** 0.5\n    else:\n        raise ValueError(f\"schedule '{schedule}' unknown.\")\n    return betas.numpy()\n\n\ndef make_ddim_timesteps(ddim_discr_method, num_ddim_timesteps, num_ddpm_timesteps, verbose=True):\n    if ddim_discr_method == 'uniform':\n        c = num_ddpm_timesteps // num_ddim_timesteps\n        ddim_timesteps = np.asarray(list(range(0, num_ddpm_timesteps, c)))\n        steps_out = ddim_timesteps + 1\n    elif ddim_discr_method == 'uniform_trailing':\n        c = num_ddpm_timesteps / num_ddim_timesteps\n        ddim_timesteps = np.flip(np.round(np.arange(num_ddpm_timesteps, 0, -c))).astype(np.int64)\n        steps_out = ddim_timesteps - 1\n    elif ddim_discr_method == 'quad':\n        ddim_timesteps = ((np.linspace(0, np.sqrt(num_ddpm_timesteps * .8), num_ddim_timesteps)) ** 2).astype(int)\n        steps_out = ddim_timesteps + 1\n    else:\n        raise NotImplementedError(f'There is no ddim discretization method called \"{ddim_discr_method}\"')\n\n    # assert ddim_timesteps.shape[0] == num_ddim_timesteps\n    # add one to get the final alpha values right (the ones from first scale to data during sampling)\n    # steps_out = ddim_timesteps + 1\n    if verbose:\n        print(f'Selected timesteps for ddim sampler: {steps_out}')\n    return steps_out\n\n\ndef make_ddim_sampling_parameters(alphacums, ddim_timesteps, eta, verbose=True):\n    # select alphas for computing the variance schedule\n    # print(f'ddim_timesteps={ddim_timesteps}, 
len_alphacums={len(alphacums)}')\n    alphas = alphacums[ddim_timesteps]\n    alphas_prev = np.asarray([alphacums[0]] + alphacums[ddim_timesteps[:-1]].tolist())\n\n    # according the formula provided in https://arxiv.org/abs/2010.02502\n    sigmas = eta * np.sqrt((1 - alphas_prev) / (1 - alphas) * (1 - alphas / alphas_prev))\n    if verbose:\n        print(f'Selected alphas for ddim sampler: a_t: {alphas}; a_(t-1): {alphas_prev}')\n        print(f'For the chosen value of eta, which is {eta}, '\n              f'this results in the following sigma_t schedule for ddim sampler {sigmas}')\n    return sigmas, alphas, alphas_prev\n\n\ndef betas_for_alpha_bar(num_diffusion_timesteps, alpha_bar, max_beta=0.999):\n    \"\"\"\n    Create a beta schedule that discretizes the given alpha_t_bar function,\n    which defines the cumulative product of (1-beta) over time from t = [0,1].\n    :param num_diffusion_timesteps: the number of betas to produce.\n    :param alpha_bar: a lambda that takes an argument t from 0 to 1 and\n                      produces the cumulative product of (1-beta) up to that\n                      part of the diffusion process.\n    :param max_beta: the maximum beta to use; use values lower than 1 to\n                     prevent singularities.\n    \"\"\"\n    betas = []\n    for i in range(num_diffusion_timesteps):\n        t1 = i / num_diffusion_timesteps\n        t2 = (i + 1) / num_diffusion_timesteps\n        betas.append(min(1 - alpha_bar(t2) / alpha_bar(t1), max_beta))\n    return np.array(betas)\n\ndef rescale_zero_terminal_snr(betas):\n    \"\"\"\n    Rescales betas to have zero terminal SNR Based on https://arxiv.org/pdf/2305.08891.pdf (Algorithm 1)\n\n    Args:\n        betas (`numpy.ndarray`):\n            the betas that the scheduler is being initialized with.\n\n    Returns:\n        `numpy.ndarray`: rescaled betas with zero terminal SNR\n    \"\"\"\n    # Convert betas to alphas_bar_sqrt\n    alphas = 1.0 - betas\n    alphas_cumprod = np.cumprod(alphas, axis=0)\n    alphas_bar_sqrt = np.sqrt(alphas_cumprod)\n\n    # Store old values.\n    alphas_bar_sqrt_0 = alphas_bar_sqrt[0].copy()\n    alphas_bar_sqrt_T = alphas_bar_sqrt[-1].copy()\n\n    # Shift so the last timestep is zero.\n    alphas_bar_sqrt -= alphas_bar_sqrt_T\n\n    # Scale so the first timestep is back to the old value.\n    alphas_bar_sqrt *= alphas_bar_sqrt_0 / (alphas_bar_sqrt_0 - alphas_bar_sqrt_T)\n\n    # Convert alphas_bar_sqrt to betas\n    alphas_bar = alphas_bar_sqrt**2  # Revert sqrt\n    alphas = alphas_bar[1:] / alphas_bar[:-1]  # Revert cumprod\n    alphas = np.concatenate([alphas_bar[0:1], alphas])\n    betas = 1 - alphas\n\n    return betas\n\n\ndef rescale_noise_cfg(noise_cfg, noise_pred_text, guidance_rescale=0.0):\n    \"\"\"\n    Rescale `noise_cfg` according to `guidance_rescale`. Based on findings of [Common Diffusion Noise Schedules and\n    Sample Steps are Flawed](https://arxiv.org/pdf/2305.08891.pdf). See Section 3.4\n    \"\"\"\n    std_text = noise_pred_text.std(dim=list(range(1, noise_pred_text.ndim)), keepdim=True)\n    std_cfg = noise_cfg.std(dim=list(range(1, noise_cfg.ndim)), keepdim=True)\n    # rescale the results from guidance (fixes overexposure)\n    noise_pred_rescaled = noise_cfg * (std_text / std_cfg)\n    # mix with the original results from guidance by factor guidance_rescale to avoid \"plain looking\" images\n    noise_cfg = guidance_rescale * noise_pred_rescaled + (1 - guidance_rescale) * noise_cfg\n    return noise_cfg"
  },
  {
    "path": "ToonCrafter/lvdm/modules/attention.py",
    "content": "import torch\nfrom torch import nn, einsum\nimport torch.nn.functional as F\nfrom einops import rearrange, repeat\nfrom functools import partial\ntry:\n    import xformers\n    import xformers.ops\n    XFORMERS_IS_AVAILABLE = True\nexcept:\n    XFORMERS_IS_AVAILABLE = False\nfrom lvdm.common import (\n    checkpoint,\n    exists,\n    default,\n)\nfrom lvdm.basics import zero_module\n\n\nclass RelativePosition(nn.Module):\n    \"\"\" https://github.com/evelinehong/Transformer_Relative_Position_PyTorch/blob/master/relative_position.py \"\"\"\n\n    def __init__(self, num_units, max_relative_position):\n        super().__init__()\n        self.num_units = num_units\n        self.max_relative_position = max_relative_position\n        self.embeddings_table = nn.Parameter(torch.Tensor(max_relative_position * 2 + 1, num_units))\n        nn.init.xavier_uniform_(self.embeddings_table)\n\n    def forward(self, length_q, length_k):\n        device = self.embeddings_table.device\n        range_vec_q = torch.arange(length_q, device=device)\n        range_vec_k = torch.arange(length_k, device=device)\n        distance_mat = range_vec_k[None, :] - range_vec_q[:, None]\n        distance_mat_clipped = torch.clamp(distance_mat, -self.max_relative_position, self.max_relative_position)\n        final_mat = distance_mat_clipped + self.max_relative_position\n        final_mat = final_mat.long()\n        embeddings = self.embeddings_table[final_mat]\n        return embeddings\n\n\nclass CrossAttention(nn.Module):\n\n    def __init__(self, query_dim, context_dim=None, heads=8, dim_head=64, dropout=0., \n                 relative_position=False, temporal_length=None, video_length=None, image_cross_attention=False, image_cross_attention_scale=1.0, image_cross_attention_scale_learnable=False, text_context_len=77):\n        super().__init__()\n        inner_dim = dim_head * heads\n        context_dim = default(context_dim, query_dim)\n\n        self.scale = dim_head**-0.5\n        self.heads = heads\n        self.dim_head = dim_head\n        self.to_q = nn.Linear(query_dim, inner_dim, bias=False)\n        self.to_k = nn.Linear(context_dim, inner_dim, bias=False)\n        self.to_v = nn.Linear(context_dim, inner_dim, bias=False)\n\n        self.to_out = nn.Sequential(nn.Linear(inner_dim, query_dim), nn.Dropout(dropout))\n        \n        self.relative_position = relative_position\n        if self.relative_position:\n            assert(temporal_length is not None)\n            self.relative_position_k = RelativePosition(num_units=dim_head, max_relative_position=temporal_length)\n            self.relative_position_v = RelativePosition(num_units=dim_head, max_relative_position=temporal_length)\n        else:\n            ## only used for spatial attention, while NOT for temporal attention\n            if XFORMERS_IS_AVAILABLE and temporal_length is None:\n                self.forward = self.efficient_forward\n\n        self.video_length = video_length\n        self.image_cross_attention = image_cross_attention\n        self.image_cross_attention_scale = image_cross_attention_scale\n        self.text_context_len = text_context_len\n        self.image_cross_attention_scale_learnable = image_cross_attention_scale_learnable\n        if self.image_cross_attention:\n            self.to_k_ip = nn.Linear(context_dim, inner_dim, bias=False)\n            self.to_v_ip = nn.Linear(context_dim, inner_dim, bias=False)\n            if image_cross_attention_scale_learnable:\n                
self.register_parameter('alpha', nn.Parameter(torch.tensor(0.)) )\n\n\n    def forward(self, x, context=None, mask=None):\n        spatial_self_attn = (context is None)\n        k_ip, v_ip, out_ip = None, None, None\n\n        h = self.heads\n        q = self.to_q(x)\n        context = default(context, x)\n\n        if self.image_cross_attention and not spatial_self_attn:\n            context, context_image = context[:,:self.text_context_len,:], context[:,self.text_context_len:,:]\n            k = self.to_k(context)\n            v = self.to_v(context)\n            k_ip = self.to_k_ip(context_image)\n            v_ip = self.to_v_ip(context_image)\n        else:\n            if not spatial_self_attn:\n                context = context[:,:self.text_context_len,:]\n            k = self.to_k(context)\n            v = self.to_v(context)\n\n        q, k, v = map(lambda t: rearrange(t, 'b n (h d) -> (b h) n d', h=h), (q, k, v))\n\n        sim = torch.einsum('b i d, b j d -> b i j', q, k) * self.scale\n        if self.relative_position:\n            len_q, len_k, len_v = q.shape[1], k.shape[1], v.shape[1]\n            k2 = self.relative_position_k(len_q, len_k)\n            sim2 = einsum('b t d, t s d -> b t s', q, k2) * self.scale # TODO check \n            sim += sim2\n        del k\n\n        if exists(mask):\n            ## feasible for causal attention mask only\n            max_neg_value = -torch.finfo(sim.dtype).max\n            mask = repeat(mask, 'b i j -> (b h) i j', h=h)\n            sim.masked_fill_(~(mask>0.5), max_neg_value)\n\n        # attention, what we cannot get enough of\n        sim = sim.softmax(dim=-1)\n\n        out = torch.einsum('b i j, b j d -> b i d', sim, v)\n        if self.relative_position:\n            v2 = self.relative_position_v(len_q, len_v)\n            out2 = einsum('b t s, t s d -> b t d', sim, v2) # TODO check\n            out += out2\n        out = rearrange(out, '(b h) n d -> b n (h d)', h=h)\n\n\n        ## for image cross-attention\n        if k_ip is not None:\n            k_ip, v_ip = map(lambda t: rearrange(t, 'b n (h d) -> (b h) n d', h=h), (k_ip, v_ip))\n            sim_ip =  torch.einsum('b i d, b j d -> b i j', q, k_ip) * self.scale\n            del k_ip\n            sim_ip = sim_ip.softmax(dim=-1)\n            out_ip = torch.einsum('b i j, b j d -> b i d', sim_ip, v_ip)\n            out_ip = rearrange(out_ip, '(b h) n d -> b n (h d)', h=h)\n\n\n        if out_ip is not None:\n            if self.image_cross_attention_scale_learnable:\n                out = out + self.image_cross_attention_scale * out_ip * (torch.tanh(self.alpha)+1)\n            else:\n                out = out + self.image_cross_attention_scale * out_ip\n        \n        return self.to_out(out)\n    \n    def efficient_forward(self, x, context=None, mask=None):\n        spatial_self_attn = (context is None)\n        k_ip, v_ip, out_ip = None, None, None\n\n        q = self.to_q(x)\n        context = default(context, x)\n\n        if self.image_cross_attention and not spatial_self_attn:\n            context, context_image = context[:,:self.text_context_len,:], context[:,self.text_context_len:,:]\n            k = self.to_k(context)\n            v = self.to_v(context)\n            k_ip = self.to_k_ip(context_image)\n            v_ip = self.to_v_ip(context_image)\n        else:\n            if not spatial_self_attn:\n                context = context[:,:self.text_context_len,:]\n            k = self.to_k(context)\n            v = self.to_v(context)\n\n        b, _, _ = q.shape\n     
   q, k, v = map(\n            lambda t: t.unsqueeze(3)\n            .reshape(b, t.shape[1], self.heads, self.dim_head)\n            .permute(0, 2, 1, 3)\n            .reshape(b * self.heads, t.shape[1], self.dim_head)\n            .contiguous(),\n            (q, k, v),\n        )\n        # actually compute the attention, what we cannot get enough of\n        out = xformers.ops.memory_efficient_attention(q, k, v, attn_bias=None, op=None)\n        \n        ## for image cross-attention\n        if k_ip is not None:\n            k_ip, v_ip = map(\n                lambda t: t.unsqueeze(3)\n                .reshape(b, t.shape[1], self.heads, self.dim_head)\n                .permute(0, 2, 1, 3)\n                .reshape(b * self.heads, t.shape[1], self.dim_head)\n                .contiguous(),\n                (k_ip, v_ip),\n            )\n            out_ip = xformers.ops.memory_efficient_attention(q, k_ip, v_ip, attn_bias=None, op=None)\n            out_ip = (\n                out_ip.unsqueeze(0)\n                .reshape(b, self.heads, out.shape[1], self.dim_head)\n                .permute(0, 2, 1, 3)\n                .reshape(b, out.shape[1], self.heads * self.dim_head)\n            )\n\n        if exists(mask):\n            raise NotImplementedError\n        out = (\n            out.unsqueeze(0)\n            .reshape(b, self.heads, out.shape[1], self.dim_head)\n            .permute(0, 2, 1, 3)\n            .reshape(b, out.shape[1], self.heads * self.dim_head)\n        )\n        if out_ip is not None:\n            if self.image_cross_attention_scale_learnable:\n                out = out + self.image_cross_attention_scale * out_ip * (torch.tanh(self.alpha)+1)\n            else:\n                out = out + self.image_cross_attention_scale * out_ip\n           \n        return self.to_out(out)\n\n\nclass BasicTransformerBlock(nn.Module):\n\n    def __init__(self, dim, n_heads, d_head, dropout=0., context_dim=None, gated_ff=True, checkpoint=True,\n                disable_self_attn=False, attention_cls=None, video_length=None, image_cross_attention=False, image_cross_attention_scale=1.0, image_cross_attention_scale_learnable=False, text_context_len=77):\n        super().__init__()\n        attn_cls = CrossAttention if attention_cls is None else attention_cls\n        self.disable_self_attn = disable_self_attn\n        self.attn1 = attn_cls(query_dim=dim, heads=n_heads, dim_head=d_head, dropout=dropout,\n            context_dim=context_dim if self.disable_self_attn else None)\n        self.ff = FeedForward(dim, dropout=dropout, glu=gated_ff)\n        self.attn2 = attn_cls(query_dim=dim, context_dim=context_dim, heads=n_heads, dim_head=d_head, dropout=dropout, video_length=video_length, image_cross_attention=image_cross_attention, image_cross_attention_scale=image_cross_attention_scale, image_cross_attention_scale_learnable=image_cross_attention_scale_learnable,text_context_len=text_context_len)\n        self.image_cross_attention = image_cross_attention\n\n        self.norm1 = nn.LayerNorm(dim)\n        self.norm2 = nn.LayerNorm(dim)\n        self.norm3 = nn.LayerNorm(dim)\n        self.checkpoint = checkpoint\n\n\n    def forward(self, x, context=None, mask=None, **kwargs):\n        ## implementation tricks: because checkpointing doesn't support non-tensor (e.g. 
None or scalar) arguments\n        input_tuple = (x,)      ## should not be (x), otherwise *input_tuple will decouple x into multiple arguments\n        if context is not None:\n            input_tuple = (x, context)\n        if mask is not None:\n            forward_mask = partial(self._forward, mask=mask)\n            return checkpoint(forward_mask, (x,), self.parameters(), self.checkpoint)\n        return checkpoint(self._forward, input_tuple, self.parameters(), self.checkpoint)\n\n\n    def _forward(self, x, context=None, mask=None):\n        x = self.attn1(self.norm1(x), context=context if self.disable_self_attn else None, mask=mask) + x\n        x = self.attn2(self.norm2(x), context=context, mask=mask) + x\n        x = self.ff(self.norm3(x)) + x\n        return x\n\n\nclass SpatialTransformer(nn.Module):\n    \"\"\"\n    Transformer block for image-like data in spatial axis.\n    First, project the input (aka embedding)\n    and reshape to b, t, d.\n    Then apply standard transformer action.\n    Finally, reshape to image\n    NEW: use_linear for more efficiency instead of the 1x1 convs\n    \"\"\"\n\n    def __init__(self, in_channels, n_heads, d_head, depth=1, dropout=0., context_dim=None,\n                 use_checkpoint=True, disable_self_attn=False, use_linear=False, video_length=None,\n                 image_cross_attention=False, image_cross_attention_scale_learnable=False):\n        super().__init__()\n        self.in_channels = in_channels\n        inner_dim = n_heads * d_head\n        self.norm = torch.nn.GroupNorm(num_groups=32, num_channels=in_channels, eps=1e-6, affine=True)\n        if not use_linear:\n            self.proj_in = nn.Conv2d(in_channels, inner_dim, kernel_size=1, stride=1, padding=0)\n        else:\n            self.proj_in = nn.Linear(in_channels, inner_dim)\n\n        attention_cls = None\n        self.transformer_blocks = nn.ModuleList([\n            BasicTransformerBlock(\n                inner_dim,\n                n_heads,\n                d_head,\n                dropout=dropout,\n                context_dim=context_dim,\n                disable_self_attn=disable_self_attn,\n                checkpoint=use_checkpoint,\n                attention_cls=attention_cls,\n                video_length=video_length,\n                image_cross_attention=image_cross_attention,\n                image_cross_attention_scale_learnable=image_cross_attention_scale_learnable,\n                ) for d in range(depth)\n        ])\n        if not use_linear:\n            self.proj_out = zero_module(nn.Conv2d(inner_dim, in_channels, kernel_size=1, stride=1, padding=0))\n        else:\n            self.proj_out = zero_module(nn.Linear(inner_dim, in_channels))\n        self.use_linear = use_linear\n\n\n    def forward(self, x, context=None, **kwargs):\n        b, c, h, w = x.shape\n        x_in = x\n        x = self.norm(x)\n        if not self.use_linear:\n            x = self.proj_in(x)\n        x = rearrange(x, 'b c h w -> b (h w) c').contiguous()\n        if self.use_linear:\n            x = self.proj_in(x)\n        for i, block in enumerate(self.transformer_blocks):\n            x = block(x, context=context, **kwargs)\n        if self.use_linear:\n            x = self.proj_out(x)\n        x = rearrange(x, 'b (h w) c -> b c h w', h=h, w=w).contiguous()\n        if not self.use_linear:\n            x = self.proj_out(x)\n        return x + x_in\n    \n    \nclass TemporalTransformer(nn.Module):\n    \"\"\"\n    Transformer block for image-like data in temporal axis.\n   
 First, reshape to b, t, d.\n    Then apply standard transformer action.\n    Finally, reshape to image\n    \"\"\"\n    def __init__(self, in_channels, n_heads, d_head, depth=1, dropout=0., context_dim=None,\n                 use_checkpoint=True, use_linear=False, only_self_att=True, causal_attention=False, causal_block_size=1,\n                 relative_position=False, temporal_length=None):\n        super().__init__()\n        self.only_self_att = only_self_att\n        self.relative_position = relative_position\n        self.causal_attention = causal_attention\n        self.causal_block_size = causal_block_size\n\n        self.in_channels = in_channels\n        inner_dim = n_heads * d_head\n        self.norm = torch.nn.GroupNorm(num_groups=32, num_channels=in_channels, eps=1e-6, affine=True)\n        self.proj_in = nn.Conv1d(in_channels, inner_dim, kernel_size=1, stride=1, padding=0)\n        if not use_linear:\n            self.proj_in = nn.Conv1d(in_channels, inner_dim, kernel_size=1, stride=1, padding=0)\n        else:\n            self.proj_in = nn.Linear(in_channels, inner_dim)\n\n        if relative_position:\n            assert(temporal_length is not None)\n            attention_cls = partial(CrossAttention, relative_position=True, temporal_length=temporal_length)\n        else:\n            attention_cls = partial(CrossAttention, temporal_length=temporal_length)\n        if self.causal_attention:\n            assert(temporal_length is not None)\n            self.mask = torch.tril(torch.ones([1, temporal_length, temporal_length]))\n\n        if self.only_self_att:\n            context_dim = None\n        self.transformer_blocks = nn.ModuleList([\n            BasicTransformerBlock(\n                inner_dim,\n                n_heads,\n                d_head,\n                dropout=dropout,\n                context_dim=context_dim,\n                attention_cls=attention_cls,\n                checkpoint=use_checkpoint) for d in range(depth)\n        ])\n        if not use_linear:\n            self.proj_out = zero_module(nn.Conv1d(inner_dim, in_channels, kernel_size=1, stride=1, padding=0))\n        else:\n            self.proj_out = zero_module(nn.Linear(inner_dim, in_channels))\n        self.use_linear = use_linear\n\n    def forward(self, x, context=None):\n        b, c, t, h, w = x.shape\n        x_in = x\n        x = self.norm(x)\n        x = rearrange(x, 'b c t h w -> (b h w) c t').contiguous()\n        if not self.use_linear:\n            x = self.proj_in(x)\n        x = rearrange(x, 'bhw c t -> bhw t c').contiguous()\n        if self.use_linear:\n            x = self.proj_in(x)\n\n        temp_mask = None\n        if self.causal_attention:\n            # slice the from mask map\n            temp_mask = self.mask[:,:t,:t].to(x.device)\n\n        if temp_mask is not None:\n            mask = temp_mask.to(x.device)\n            mask = repeat(mask, 'l i j -> (l bhw) i j', bhw=b*h*w)\n        else:\n            mask = None\n\n        if self.only_self_att:\n            ## note: if no context is given, cross-attention defaults to self-attention\n            for i, block in enumerate(self.transformer_blocks):\n                x = block(x, mask=mask)\n            x = rearrange(x, '(b hw) t c -> b hw t c', b=b).contiguous()\n        else:\n            x = rearrange(x, '(b hw) t c -> b hw t c', b=b).contiguous()\n            context = rearrange(context, '(b t) l con -> b t l con', t=t).contiguous()\n            for i, block in enumerate(self.transformer_blocks):\n                
# calculate each batch one by one (since number in shape could not greater then 65,535 for some package)\n                for j in range(b):\n                    context_j = repeat(\n                        context[j],\n                        't l con -> (t r) l con', r=(h * w) // t, t=t).contiguous()\n                    ## note: causal mask will not applied in cross-attention case\n                    x[j] = block(x[j], context=context_j)\n        \n        if self.use_linear:\n            x = self.proj_out(x)\n            x = rearrange(x, 'b (h w) t c -> b c t h w', h=h, w=w).contiguous()\n        if not self.use_linear:\n            x = rearrange(x, 'b hw t c -> (b hw) c t').contiguous()\n            x = self.proj_out(x)\n            x = rearrange(x, '(b h w) c t -> b c t h w', b=b, h=h, w=w).contiguous()\n\n        return x + x_in\n    \n\nclass GEGLU(nn.Module):\n    def __init__(self, dim_in, dim_out):\n        super().__init__()\n        self.proj = nn.Linear(dim_in, dim_out * 2)\n\n    def forward(self, x):\n        x, gate = self.proj(x).chunk(2, dim=-1)\n        return x * F.gelu(gate)\n\n\nclass FeedForward(nn.Module):\n    def __init__(self, dim, dim_out=None, mult=4, glu=False, dropout=0.):\n        super().__init__()\n        inner_dim = int(dim * mult)\n        dim_out = default(dim_out, dim)\n        project_in = nn.Sequential(\n            nn.Linear(dim, inner_dim),\n            nn.GELU()\n        ) if not glu else GEGLU(dim, inner_dim)\n\n        self.net = nn.Sequential(\n            project_in,\n            nn.Dropout(dropout),\n            nn.Linear(inner_dim, dim_out)\n        )\n\n    def forward(self, x):\n        return self.net(x)\n\n\nclass LinearAttention(nn.Module):\n    def __init__(self, dim, heads=4, dim_head=32):\n        super().__init__()\n        self.heads = heads\n        hidden_dim = dim_head * heads\n        self.to_qkv = nn.Conv2d(dim, hidden_dim * 3, 1, bias = False)\n        self.to_out = nn.Conv2d(hidden_dim, dim, 1)\n\n    def forward(self, x):\n        b, c, h, w = x.shape\n        qkv = self.to_qkv(x)\n        q, k, v = rearrange(qkv, 'b (qkv heads c) h w -> qkv b heads c (h w)', heads = self.heads, qkv=3)\n        k = k.softmax(dim=-1)  \n        context = torch.einsum('bhdn,bhen->bhde', k, v)\n        out = torch.einsum('bhde,bhdn->bhen', context, q)\n        out = rearrange(out, 'b heads c (h w) -> b (heads c) h w', heads=self.heads, h=h, w=w)\n        return self.to_out(out)\n\n\nclass SpatialSelfAttention(nn.Module):\n    def __init__(self, in_channels):\n        super().__init__()\n        self.in_channels = in_channels\n\n        self.norm = torch.nn.GroupNorm(num_groups=32, num_channels=in_channels, eps=1e-6, affine=True)\n        self.q = torch.nn.Conv2d(in_channels,\n                                 in_channels,\n                                 kernel_size=1,\n                                 stride=1,\n                                 padding=0)\n        self.k = torch.nn.Conv2d(in_channels,\n                                 in_channels,\n                                 kernel_size=1,\n                                 stride=1,\n                                 padding=0)\n        self.v = torch.nn.Conv2d(in_channels,\n                                 in_channels,\n                                 kernel_size=1,\n                                 stride=1,\n                                 padding=0)\n        self.proj_out = torch.nn.Conv2d(in_channels,\n                                        in_channels,\n                       
                 kernel_size=1,\n                                        stride=1,\n                                        padding=0)\n\n    def forward(self, x):\n        h_ = x\n        h_ = self.norm(h_)\n        q = self.q(h_)\n        k = self.k(h_)\n        v = self.v(h_)\n\n        # compute attention\n        b,c,h,w = q.shape\n        q = rearrange(q, 'b c h w -> b (h w) c')\n        k = rearrange(k, 'b c h w -> b c (h w)')\n        w_ = torch.einsum('bij,bjk->bik', q, k)\n\n        w_ = w_ * (int(c)**(-0.5))\n        w_ = torch.nn.functional.softmax(w_, dim=2)\n\n        # attend to values\n        v = rearrange(v, 'b c h w -> b c (h w)')\n        w_ = rearrange(w_, 'b i j -> b j i')\n        h_ = torch.einsum('bij,bjk->bik', v, w_)\n        h_ = rearrange(h_, 'b c (h w) -> b c h w', h=h)\n        h_ = self.proj_out(h_)\n\n        return x+h_\n"
  },
  {
    "path": "ToonCrafter/lvdm/modules/attention_svd.py",
    "content": "import logging\nimport math\nfrom inspect import isfunction\nfrom typing import Any, Optional\n\nimport torch\nimport torch.nn.functional as F\nfrom einops import rearrange, repeat\nfrom packaging import version\nfrom torch import nn\nfrom torch.utils.checkpoint import checkpoint\n\nlogpy = logging.getLogger(__name__)\n\nif version.parse(torch.__version__) >= version.parse(\"2.0.0\"):\n    SDP_IS_AVAILABLE = True\n    from torch.backends.cuda import SDPBackend, sdp_kernel\n\n    BACKEND_MAP = {\n        SDPBackend.MATH: {\n            \"enable_math\": True,\n            \"enable_flash\": False,\n            \"enable_mem_efficient\": False,\n        },\n        SDPBackend.FLASH_ATTENTION: {\n            \"enable_math\": False,\n            \"enable_flash\": True,\n            \"enable_mem_efficient\": False,\n        },\n        SDPBackend.EFFICIENT_ATTENTION: {\n            \"enable_math\": False,\n            \"enable_flash\": False,\n            \"enable_mem_efficient\": True,\n        },\n        None: {\"enable_math\": True, \"enable_flash\": True, \"enable_mem_efficient\": True},\n    }\nelse:\n    from contextlib import nullcontext\n\n    SDP_IS_AVAILABLE = False\n    sdp_kernel = nullcontext\n    BACKEND_MAP = {}\n    logpy.warn(\n        f\"No SDP backend available, likely because you are running in pytorch \"\n        f\"versions < 2.0. In fact, you are using PyTorch {torch.__version__}. \"\n        f\"You might want to consider upgrading.\"\n    )\n\ntry:\n    import xformers\n    import xformers.ops\n\n    XFORMERS_IS_AVAILABLE = True\nexcept:\n    XFORMERS_IS_AVAILABLE = False\n    logpy.warn(\"no module 'xformers'. Processing without...\")\n\n# from .diffusionmodules.util import mixed_checkpoint as checkpoint\n\n\ndef exists(val):\n    return val is not None\n\n\ndef uniq(arr):\n    return {el: True for el in arr}.keys()\n\n\ndef default(val, d):\n    if exists(val):\n        return val\n    return d() if isfunction(d) else d\n\n\ndef max_neg_value(t):\n    return -torch.finfo(t.dtype).max\n\n\ndef init_(tensor):\n    dim = tensor.shape[-1]\n    std = 1 / math.sqrt(dim)\n    tensor.uniform_(-std, std)\n    return tensor\n\n\n# feedforward\nclass GEGLU(nn.Module):\n    def __init__(self, dim_in, dim_out):\n        super().__init__()\n        self.proj = nn.Linear(dim_in, dim_out * 2)\n\n    def forward(self, x):\n        x, gate = self.proj(x).chunk(2, dim=-1)\n        return x * F.gelu(gate)\n\n\nclass FeedForward(nn.Module):\n    def __init__(self, dim, dim_out=None, mult=4, glu=False, dropout=0.0):\n        super().__init__()\n        inner_dim = int(dim * mult)\n        dim_out = default(dim_out, dim)\n        project_in = (\n            nn.Sequential(nn.Linear(dim, inner_dim), nn.GELU())\n            if not glu\n            else GEGLU(dim, inner_dim)\n        )\n\n        self.net = nn.Sequential(\n            project_in, nn.Dropout(dropout), nn.Linear(inner_dim, dim_out)\n        )\n\n    def forward(self, x):\n        return self.net(x)\n\n\ndef zero_module(module):\n    \"\"\"\n    Zero out the parameters of a module and return it.\n    \"\"\"\n    for p in module.parameters():\n        p.detach().zero_()\n    return module\n\n\ndef Normalize(in_channels):\n    return torch.nn.GroupNorm(\n        num_groups=32, num_channels=in_channels, eps=1e-6, affine=True\n    )\n\n\nclass LinearAttention(nn.Module):\n    def __init__(self, dim, heads=4, dim_head=32):\n        super().__init__()\n        self.heads = heads\n        hidden_dim = dim_head * heads\n      
  self.to_qkv = nn.Conv2d(dim, hidden_dim * 3, 1, bias=False)\n        self.to_out = nn.Conv2d(hidden_dim, dim, 1)\n\n    def forward(self, x):\n        b, c, h, w = x.shape\n        qkv = self.to_qkv(x)\n        q, k, v = rearrange(\n            qkv, \"b (qkv heads c) h w -> qkv b heads c (h w)\", heads=self.heads, qkv=3\n        )\n        k = k.softmax(dim=-1)\n        context = torch.einsum(\"bhdn,bhen->bhde\", k, v)\n        out = torch.einsum(\"bhde,bhdn->bhen\", context, q)\n        out = rearrange(\n            out, \"b heads c (h w) -> b (heads c) h w\", heads=self.heads, h=h, w=w\n        )\n        return self.to_out(out)\n\n\nclass SelfAttention(nn.Module):\n    ATTENTION_MODES = (\"xformers\", \"torch\", \"math\")\n\n    def __init__(\n        self,\n        dim: int,\n        num_heads: int = 8,\n        qkv_bias: bool = False,\n        qk_scale: Optional[float] = None,\n        attn_drop: float = 0.0,\n        proj_drop: float = 0.0,\n        attn_mode: str = \"xformers\",\n    ):\n        if attn_mode == \"xformers\" and XFORMERS_IS_AVAILABLE is False:\n            attn_mode = \"torch\"\n            logpy.warn(\"no module 'xformers'. Processing without...\")\n        super().__init__()\n        self.num_heads = num_heads\n        head_dim = dim // num_heads\n        self.scale = qk_scale or head_dim**-0.5\n\n        self.qkv = nn.Linear(dim, dim * 3, bias=qkv_bias)\n        self.attn_drop = nn.Dropout(attn_drop)\n        self.proj = nn.Linear(dim, dim)\n        self.proj_drop = nn.Dropout(proj_drop)\n        assert attn_mode in self.ATTENTION_MODES\n        self.attn_mode = attn_mode\n\n    def forward(self, x: torch.Tensor) -> torch.Tensor:\n        B, L, C = x.shape\n\n        qkv = self.qkv(x)\n        if self.attn_mode == \"torch\":\n            qkv = rearrange(qkv, \"B L (K H D) -> K B H L D\", K=3, H=self.num_heads).float()\n            q, k, v = qkv[0], qkv[1], qkv[2]  # B H L D\n            x = torch.nn.functional.scaled_dot_product_attention(q, k, v)\n            x = rearrange(x, \"B H L D -> B L (H D)\")\n        elif self.attn_mode == \"xformers\":\n            qkv = rearrange(qkv, \"B L (K H D) -> K B L H D\", K=3, H=self.num_heads)\n            q, k, v = qkv[0], qkv[1], qkv[2]  # B L H D\n            x = xformers.ops.memory_efficient_attention(q, k, v)\n            x = rearrange(x, \"B L H D -> B L (H D)\", H=self.num_heads)\n        elif self.attn_mode == \"math\":\n            qkv = rearrange(qkv, \"B L (K H D) -> K B H L D\", K=3, H=self.num_heads)\n            q, k, v = qkv[0], qkv[1], qkv[2]  # B H L D\n            attn = (q @ k.transpose(-2, -1)) * self.scale\n            attn = attn.softmax(dim=-1)\n            attn = self.attn_drop(attn)\n            x = (attn @ v).transpose(1, 2).reshape(B, L, C)\n        else:\n            raise NotImplemented\n\n        x = self.proj(x)\n        x = self.proj_drop(x)\n        return x\n\n\nclass SpatialSelfAttention(nn.Module):\n    def __init__(self, in_channels):\n        super().__init__()\n        self.in_channels = in_channels\n\n        self.norm = Normalize(in_channels)\n        self.q = torch.nn.Conv2d(\n            in_channels, in_channels, kernel_size=1, stride=1, padding=0\n        )\n        self.k = torch.nn.Conv2d(\n            in_channels, in_channels, kernel_size=1, stride=1, padding=0\n        )\n        self.v = torch.nn.Conv2d(\n            in_channels, in_channels, kernel_size=1, stride=1, padding=0\n        )\n        self.proj_out = torch.nn.Conv2d(\n            in_channels, in_channels, 
kernel_size=1, stride=1, padding=0\n        )\n\n    def forward(self, x):\n        h_ = x\n        h_ = self.norm(h_)\n        q = self.q(h_)\n        k = self.k(h_)\n        v = self.v(h_)\n\n        # compute attention\n        b, c, h, w = q.shape\n        q = rearrange(q, \"b c h w -> b (h w) c\")\n        k = rearrange(k, \"b c h w -> b c (h w)\")\n        w_ = torch.einsum(\"bij,bjk->bik\", q, k)\n\n        w_ = w_ * (int(c) ** (-0.5))\n        w_ = torch.nn.functional.softmax(w_, dim=2)\n\n        # attend to values\n        v = rearrange(v, \"b c h w -> b c (h w)\")\n        w_ = rearrange(w_, \"b i j -> b j i\")\n        h_ = torch.einsum(\"bij,bjk->bik\", v, w_)\n        h_ = rearrange(h_, \"b c (h w) -> b c h w\", h=h)\n        h_ = self.proj_out(h_)\n\n        return x + h_\n\n\nclass CrossAttention(nn.Module):\n    def __init__(\n        self,\n        query_dim,\n        context_dim=None,\n        heads=8,\n        dim_head=64,\n        dropout=0.0,\n        backend=None,\n    ):\n        super().__init__()\n        inner_dim = dim_head * heads\n        context_dim = default(context_dim, query_dim)\n\n        self.scale = dim_head**-0.5\n        self.heads = heads\n\n        self.to_q = nn.Linear(query_dim, inner_dim, bias=False)\n        self.to_k = nn.Linear(context_dim, inner_dim, bias=False)\n        self.to_v = nn.Linear(context_dim, inner_dim, bias=False)\n\n        self.to_out = nn.Sequential(\n            nn.Linear(inner_dim, query_dim), nn.Dropout(dropout)\n        )\n        self.backend = backend\n\n    def forward(\n        self,\n        x,\n        context=None,\n        mask=None,\n        additional_tokens=None,\n        n_times_crossframe_attn_in_self=0,\n    ):\n        h = self.heads\n\n        if additional_tokens is not None:\n            # get the number of masked tokens at the beginning of the output sequence\n            n_tokens_to_mask = additional_tokens.shape[1]\n            # add additional token\n            x = torch.cat([additional_tokens, x], dim=1)\n\n        q = self.to_q(x)\n        context = default(context, x)\n        k = self.to_k(context)\n        v = self.to_v(context)\n\n        if n_times_crossframe_attn_in_self:\n            # reprogramming cross-frame attention as in https://arxiv.org/abs/2303.13439\n            assert x.shape[0] % n_times_crossframe_attn_in_self == 0\n            n_cp = x.shape[0] // n_times_crossframe_attn_in_self\n            k = repeat(\n                k[::n_times_crossframe_attn_in_self], \"b ... -> (b n) ...\", n=n_cp\n            )\n            v = repeat(\n                v[::n_times_crossframe_attn_in_self], \"b ... -> (b n) ...\", n=n_cp\n            )\n\n        q, k, v = map(lambda t: rearrange(t, \"b n (h d) -> b h n d\", h=h), (q, k, v))\n\n        ## old\n        \"\"\"\n        sim = einsum('b i d, b j d -> b i j', q, k) * self.scale\n        del q, k\n\n        if exists(mask):\n            mask = rearrange(mask, 'b ... 
-> b (...)')\n            max_neg_value = -torch.finfo(sim.dtype).max\n            mask = repeat(mask, 'b j -> (b h) () j', h=h)\n            sim.masked_fill_(~mask, max_neg_value)\n\n        # attention, what we cannot get enough of\n        sim = sim.softmax(dim=-1)\n\n        out = einsum('b i j, b j d -> b i d', sim, v)\n        \"\"\"\n        ## new\n        with sdp_kernel(**BACKEND_MAP[self.backend]):\n            # print(\"dispatching into backend\", self.backend, \"q/k/v shape: \", q.shape, k.shape, v.shape)\n            out = F.scaled_dot_product_attention(\n                q, k, v, attn_mask=mask\n            )  # scale is dim_head ** -0.5 per default\n\n        del q, k, v\n        out = rearrange(out, \"b h n d -> b n (h d)\", h=h)\n\n        if additional_tokens is not None:\n            # remove additional token\n            out = out[:, n_tokens_to_mask:]\n        return self.to_out(out)\n\n\nclass MemoryEfficientCrossAttention(nn.Module):\n    # https://github.com/MatthieuTPHR/diffusers/blob/d80b531ff8060ec1ea982b65a1b8df70f73aa67c/src/diffusers/models/attention.py#L223\n    def __init__(\n        self, query_dim, context_dim=None, heads=8, dim_head=64, dropout=0.0, **kwargs\n    ):\n        super().__init__()\n        logpy.debug(\n            f\"Setting up {self.__class__.__name__}. Query dim is {query_dim}, \"\n            f\"context_dim is {context_dim} and using {heads} heads with a \"\n            f\"dimension of {dim_head}.\"\n        )\n        inner_dim = dim_head * heads\n        context_dim = default(context_dim, query_dim)\n\n        self.heads = heads\n        self.dim_head = dim_head\n\n        self.to_q = nn.Linear(query_dim, inner_dim, bias=False)\n        self.to_k = nn.Linear(context_dim, inner_dim, bias=False)\n        self.to_v = nn.Linear(context_dim, inner_dim, bias=False)\n\n        self.to_out = nn.Sequential(\n            nn.Linear(inner_dim, query_dim), nn.Dropout(dropout)\n        )\n        self.attention_op: Optional[Any] = None\n\n    def forward(\n        self,\n        x,\n        context=None,\n        mask=None,\n        additional_tokens=None,\n        n_times_crossframe_attn_in_self=0,\n    ):\n        if additional_tokens is not None:\n            # get the number of masked tokens at the beginning of the output sequence\n            n_tokens_to_mask = additional_tokens.shape[1]\n            # add additional token\n            x = torch.cat([additional_tokens, x], dim=1)\n        q = self.to_q(x)\n        context = default(context, x)\n        k = self.to_k(context)\n        v = self.to_v(context)\n\n        if n_times_crossframe_attn_in_self:\n            # reprogramming cross-frame attention as in https://arxiv.org/abs/2303.13439\n            assert x.shape[0] % n_times_crossframe_attn_in_self == 0\n            # n_cp = x.shape[0]//n_times_crossframe_attn_in_self\n            k = repeat(\n                k[::n_times_crossframe_attn_in_self],\n                \"b ... -> (b n) ...\",\n                n=n_times_crossframe_attn_in_self,\n            )\n            v = repeat(\n                v[::n_times_crossframe_attn_in_self],\n                \"b ... 
-> (b n) ...\",\n                n=n_times_crossframe_attn_in_self,\n            )\n\n        b, _, _ = q.shape\n        q, k, v = map(\n            lambda t: t.unsqueeze(3)\n            .reshape(b, t.shape[1], self.heads, self.dim_head)\n            .permute(0, 2, 1, 3)\n            .reshape(b * self.heads, t.shape[1], self.dim_head)\n            .contiguous(),\n            (q, k, v),\n        )\n\n        # actually compute the attention, what we cannot get enough of\n        if version.parse(xformers.__version__) >= version.parse(\"0.0.21\"):\n            # NOTE: workaround for\n            # https://github.com/facebookresearch/xformers/issues/845\n            max_bs = 32768\n            N = q.shape[0]\n            n_batches = math.ceil(N / max_bs)\n            out = list()\n            for i_batch in range(n_batches):\n                batch = slice(i_batch * max_bs, (i_batch + 1) * max_bs)\n                out.append(\n                    xformers.ops.memory_efficient_attention(\n                        q[batch],\n                        k[batch],\n                        v[batch],\n                        attn_bias=None,\n                        op=self.attention_op,\n                    )\n                )\n            out = torch.cat(out, 0)\n        else:\n            out = xformers.ops.memory_efficient_attention(\n                q, k, v, attn_bias=None, op=self.attention_op\n            )\n\n        # TODO: Use this directly in the attention operation, as a bias\n        if exists(mask):\n            raise NotImplementedError\n        out = (\n            out.unsqueeze(0)\n            .reshape(b, self.heads, out.shape[1], self.dim_head)\n            .permute(0, 2, 1, 3)\n            .reshape(b, out.shape[1], self.heads * self.dim_head)\n        )\n        if additional_tokens is not None:\n            # remove additional token\n            out = out[:, n_tokens_to_mask:]\n        return self.to_out(out)\n\n\nclass BasicTransformerBlock(nn.Module):\n    ATTENTION_MODES = {\n        \"softmax\": CrossAttention,  # vanilla attention\n        \"softmax-xformers\": MemoryEfficientCrossAttention,  # ampere\n    }\n\n    def __init__(\n        self,\n        dim,\n        n_heads,\n        d_head,\n        dropout=0.0,\n        context_dim=None,\n        gated_ff=True,\n        checkpoint=True,\n        disable_self_attn=False,\n        attn_mode=\"softmax\",\n        sdp_backend=None,\n    ):\n        super().__init__()\n        assert attn_mode in self.ATTENTION_MODES\n        if attn_mode != \"softmax\" and not XFORMERS_IS_AVAILABLE:\n            logpy.warn(\n                f\"Attention mode '{attn_mode}' is not available. Falling \"\n                f\"back to native attention. This is not a problem in \"\n                f\"Pytorch >= 2.0. FYI, you are running with PyTorch \"\n                f\"version {torch.__version__}.\"\n            )\n            attn_mode = \"softmax\"\n        elif attn_mode == \"softmax\" and not SDP_IS_AVAILABLE:\n            logpy.warn(\n                \"We do not support vanilla attention anymore, as it is too \"\n                \"expensive. Sorry.\"\n            )\n            if not XFORMERS_IS_AVAILABLE:\n                assert (\n                    False\n                ), \"Please install xformers via e.g. 
'pip install xformers==0.0.16'\"\n            else:\n                logpy.info(\"Falling back to xformers efficient attention.\")\n                attn_mode = \"softmax-xformers\"\n        attn_cls = self.ATTENTION_MODES[attn_mode]\n        if version.parse(torch.__version__) >= version.parse(\"2.0.0\"):\n            assert sdp_backend is None or isinstance(sdp_backend, SDPBackend)\n        else:\n            assert sdp_backend is None\n        self.disable_self_attn = disable_self_attn\n        self.attn1 = attn_cls(\n            query_dim=dim,\n            heads=n_heads,\n            dim_head=d_head,\n            dropout=dropout,\n            context_dim=context_dim if self.disable_self_attn else None,\n            backend=sdp_backend,\n        )  # is a self-attention if not self.disable_self_attn\n        self.ff = FeedForward(dim, dropout=dropout, glu=gated_ff)\n        self.attn2 = attn_cls(\n            query_dim=dim,\n            context_dim=context_dim,\n            heads=n_heads,\n            dim_head=d_head,\n            dropout=dropout,\n            backend=sdp_backend,\n        )  # is self-attn if context is none\n        self.norm1 = nn.LayerNorm(dim)\n        self.norm2 = nn.LayerNorm(dim)\n        self.norm3 = nn.LayerNorm(dim)\n        self.checkpoint = checkpoint\n        if self.checkpoint:\n            logpy.debug(f\"{self.__class__.__name__} is using checkpointing\")\n\n    def forward(\n        self, x, context=None, additional_tokens=None, n_times_crossframe_attn_in_self=0\n    ):\n        kwargs = {\"x\": x}\n\n        if context is not None:\n            kwargs.update({\"context\": context})\n\n        if additional_tokens is not None:\n            kwargs.update({\"additional_tokens\": additional_tokens})\n\n        if n_times_crossframe_attn_in_self:\n            kwargs.update(\n                {\"n_times_crossframe_attn_in_self\": n_times_crossframe_attn_in_self}\n            )\n\n        # return mixed_checkpoint(self._forward, kwargs, self.parameters(), self.checkpoint)\n        if self.checkpoint:\n            # inputs = {\"x\": x, \"context\": context}\n            return checkpoint(self._forward, x, context)\n            # return checkpoint(self._forward, inputs, self.parameters(), self.checkpoint)\n        else:\n            return self._forward(**kwargs)\n\n    def _forward(\n        self, x, context=None, additional_tokens=None, n_times_crossframe_attn_in_self=0\n    ):\n        x = (\n            self.attn1(\n                self.norm1(x),\n                context=context if self.disable_self_attn else None,\n                additional_tokens=additional_tokens,\n                n_times_crossframe_attn_in_self=n_times_crossframe_attn_in_self\n                if not self.disable_self_attn\n                else 0,\n            )\n            + x\n        )\n        x = (\n            self.attn2(\n                self.norm2(x), context=context, additional_tokens=additional_tokens\n            )\n            + x\n        )\n        x = self.ff(self.norm3(x)) + x\n        return x\n\n\nclass BasicTransformerSingleLayerBlock(nn.Module):\n    ATTENTION_MODES = {\n        \"softmax\": CrossAttention,  # vanilla attention\n        \"softmax-xformers\": MemoryEfficientCrossAttention  # on the A100s not quite as fast as the above version\n        # (todo might depend on head_dim, check, falls back to semi-optimized kernels for dim!=[16,32,64,128])\n    }\n\n    def __init__(\n        self,\n        dim,\n        n_heads,\n        d_head,\n        dropout=0.0,\n    
    context_dim=None,\n        gated_ff=True,\n        checkpoint=True,\n        attn_mode=\"softmax\",\n    ):\n        super().__init__()\n        assert attn_mode in self.ATTENTION_MODES\n        attn_cls = self.ATTENTION_MODES[attn_mode]\n        self.attn1 = attn_cls(\n            query_dim=dim,\n            heads=n_heads,\n            dim_head=d_head,\n            dropout=dropout,\n            context_dim=context_dim,\n        )\n        self.ff = FeedForward(dim, dropout=dropout, glu=gated_ff)\n        self.norm1 = nn.LayerNorm(dim)\n        self.norm2 = nn.LayerNorm(dim)\n        self.checkpoint = checkpoint\n\n    def forward(self, x, context=None):\n        # inputs = {\"x\": x, \"context\": context}\n        # return checkpoint(self._forward, inputs, self.parameters(), self.checkpoint)\n        return checkpoint(self._forward, x, context)\n\n    def _forward(self, x, context=None):\n        x = self.attn1(self.norm1(x), context=context) + x\n        x = self.ff(self.norm2(x)) + x\n        return x\n\n\nclass SpatialTransformer(nn.Module):\n    \"\"\"\n    Transformer block for image-like data.\n    First, project the input (aka embedding)\n    and reshape to b, t, d.\n    Then apply standard transformer action.\n    Finally, reshape to image\n    NEW: use_linear for more efficiency instead of the 1x1 convs\n    \"\"\"\n\n    def __init__(\n        self,\n        in_channels,\n        n_heads,\n        d_head,\n        depth=1,\n        dropout=0.0,\n        context_dim=None,\n        disable_self_attn=False,\n        use_linear=False,\n        attn_type=\"softmax\",\n        use_checkpoint=True,\n        # sdp_backend=SDPBackend.FLASH_ATTENTION\n        sdp_backend=None,\n    ):\n        super().__init__()\n        logpy.debug(\n            f\"constructing {self.__class__.__name__} of depth {depth} w/ \"\n            f\"{in_channels} channels and {n_heads} heads.\"\n        )\n\n        if exists(context_dim) and not isinstance(context_dim, list):\n            context_dim = [context_dim]\n        if exists(context_dim) and isinstance(context_dim, list):\n            if depth != len(context_dim):\n                logpy.warn(\n                    f\"{self.__class__.__name__}: Found context dims \"\n                    f\"{context_dim} of depth {len(context_dim)}, which does not \"\n                    f\"match the specified 'depth' of {depth}. 
Setting context_dim \"\n                    f\"to {depth * [context_dim[0]]} now.\"\n                )\n                # depth does not match context dims.\n                assert all(\n                    map(lambda x: x == context_dim[0], context_dim)\n                ), \"need homogeneous context_dim to match depth automatically\"\n                context_dim = depth * [context_dim[0]]\n        elif context_dim is None:\n            context_dim = [None] * depth\n        self.in_channels = in_channels\n        inner_dim = n_heads * d_head\n        self.norm = Normalize(in_channels)\n        if not use_linear:\n            self.proj_in = nn.Conv2d(\n                in_channels, inner_dim, kernel_size=1, stride=1, padding=0\n            )\n        else:\n            self.proj_in = nn.Linear(in_channels, inner_dim)\n\n        self.transformer_blocks = nn.ModuleList(\n            [\n                BasicTransformerBlock(\n                    inner_dim,\n                    n_heads,\n                    d_head,\n                    dropout=dropout,\n                    context_dim=context_dim[d],\n                    disable_self_attn=disable_self_attn,\n                    attn_mode=attn_type,\n                    checkpoint=use_checkpoint,\n                    sdp_backend=sdp_backend,\n                )\n                for d in range(depth)\n            ]\n        )\n        if not use_linear:\n            self.proj_out = zero_module(\n                nn.Conv2d(inner_dim, in_channels, kernel_size=1, stride=1, padding=0)\n            )\n        else:\n            # self.proj_out = zero_module(nn.Linear(in_channels, inner_dim))\n            self.proj_out = zero_module(nn.Linear(inner_dim, in_channels))\n        self.use_linear = use_linear\n\n    def forward(self, x, context=None):\n        # note: if no context is given, cross-attention defaults to self-attention\n        if not isinstance(context, list):\n            context = [context]\n        b, c, h, w = x.shape\n        x_in = x\n        x = self.norm(x)\n        if not self.use_linear:\n            x = self.proj_in(x)\n        x = rearrange(x, \"b c h w -> b (h w) c\").contiguous()\n        if self.use_linear:\n            x = self.proj_in(x)\n        for i, block in enumerate(self.transformer_blocks):\n            if i > 0 and len(context) == 1:\n                i = 0  # use same context for each block\n            x = block(x, context=context[i])\n        if self.use_linear:\n            x = self.proj_out(x)\n        x = rearrange(x, \"b (h w) c -> b c h w\", h=h, w=w).contiguous()\n        if not self.use_linear:\n            x = self.proj_out(x)\n        return x + x_in\n\n\nclass SimpleTransformer(nn.Module):\n    def __init__(\n        self,\n        dim: int,\n        depth: int,\n        heads: int,\n        dim_head: int,\n        context_dim: Optional[int] = None,\n        dropout: float = 0.0,\n        checkpoint: bool = True,\n    ):\n        super().__init__()\n        self.layers = nn.ModuleList([])\n        for _ in range(depth):\n            self.layers.append(\n                BasicTransformerBlock(\n                    dim,\n                    heads,\n                    dim_head,\n                    dropout=dropout,\n                    context_dim=context_dim,\n                    attn_mode=\"softmax-xformers\",\n                    checkpoint=checkpoint,\n                )\n            )\n\n    def forward(\n        self,\n        x: torch.Tensor,\n        context: Optional[torch.Tensor] = None,\n    ) -> 
torch.Tensor:\n        for layer in self.layers:\n            x = layer(x, context)\n        return x"
  },
  {
    "path": "ToonCrafter/lvdm/modules/encoders/condition.py",
    "content": "import torch\nimport torch.nn as nn\nimport kornia\nimport open_clip\nimport os\nfrom torch.utils.checkpoint import checkpoint\nfrom transformers import T5Tokenizer, T5EncoderModel, CLIPTokenizer, CLIPTextModel\nfrom lvdm.common import autocast\nfrom ToonCrafter.utils.utils import count_params\n\n\nclass AbstractEncoder(nn.Module):\n    def __init__(self):\n        super().__init__()\n\n    def encode(self, *args, **kwargs):\n        raise NotImplementedError\n\n\nclass IdentityEncoder(AbstractEncoder):\n    def encode(self, x):\n        return x\n\n\nclass ClassEmbedder(nn.Module):\n    def __init__(self, embed_dim, n_classes=1000, key='class', ucg_rate=0.1):\n        super().__init__()\n        self.key = key\n        self.embedding = nn.Embedding(n_classes, embed_dim)\n        self.n_classes = n_classes\n        self.ucg_rate = ucg_rate\n\n    def forward(self, batch, key=None, disable_dropout=False):\n        if key is None:\n            key = self.key\n        # this is for use in crossattn\n        c = batch[key][:, None]\n        if self.ucg_rate > 0. and not disable_dropout:\n            mask = 1. - torch.bernoulli(torch.ones_like(c) * self.ucg_rate)\n            c = mask * c + (1 - mask) * torch.ones_like(c) * (self.n_classes - 1)\n            c = c.long()\n        c = self.embedding(c)\n        return c\n\n    def get_unconditional_conditioning(self, bs, device=\"cuda\"):\n        uc_class = self.n_classes - 1  # 1000 classes --> 0 ... 999, one extra class for ucg (class 1000)\n        uc = torch.ones((bs,), device=device) * uc_class\n        uc = {self.key: uc}\n        return uc\n\n\ndef disabled_train(self, mode=True):\n    \"\"\"Overwrite model.train with this function to make sure train/eval mode\n    does not change anymore.\"\"\"\n    return self\n\n\ndef get_available_devices():\n    devices = []\n    if torch.cuda.is_available():\n        devices.append(\"cuda\")\n    elif torch.backends.mps.is_available():\n        devices.append(\"mps\")\n    devices.append(torch.device(\"cpu\"))\n    return devices\n\n\ndef get_device(device):\n    devices = get_available_devices()\n    if device in devices:\n        return device\n    return devices[0]\n\n\nclass FrozenT5Embedder(AbstractEncoder):\n    \"\"\"Uses the T5 transformer encoder for text\"\"\"\n\n    def __init__(self, version=\"google/t5-v1_1-large\", device=\"cuda\", max_length=77,\n                 freeze=True):  # others are google/t5-v1_1-xl and google/t5-v1_1-xxl\n        super().__init__()\n        self.tokenizer = T5Tokenizer.from_pretrained(version)\n        self.transformer = T5EncoderModel.from_pretrained(version)\n        self.device = get_device(device)\n        self.max_length = max_length  # TODO: typical value?\n        if freeze:\n            self.freeze()\n\n    def freeze(self):\n        self.transformer = self.transformer.eval()\n        # self.train = disabled_train\n        for param in self.parameters():\n            param.requires_grad = False\n\n    def forward(self, text):\n        batch_encoding = self.tokenizer(text, truncation=True, max_length=self.max_length, return_length=True,\n                                        return_overflowing_tokens=False, padding=\"max_length\", return_tensors=\"pt\")\n        tokens = batch_encoding[\"input_ids\"].to(self.device)\n        outputs = self.transformer(input_ids=tokens)\n\n        z = outputs.last_hidden_state\n        return z\n\n    def encode(self, text):\n        return self(text)\n\n\nclass FrozenCLIPEmbedder(AbstractEncoder):\n  
  \"\"\"Uses the CLIP transformer encoder for text (from huggingface)\"\"\"\n    LAYERS = [\n        \"last\",\n        \"pooled\",\n        \"hidden\"\n    ]\n\n    def __init__(self, version=\"openai/clip-vit-large-patch14\", device=\"cuda\", max_length=77,\n                 freeze=True, layer=\"last\", layer_idx=None):  # clip-vit-base-patch32\n        super().__init__()\n        assert layer in self.LAYERS\n        self.tokenizer = CLIPTokenizer.from_pretrained(version)\n        self.transformer = CLIPTextModel.from_pretrained(version)\n        self.device = get_device(device)\n        self.max_length = max_length\n        if freeze:\n            self.freeze()\n        self.layer = layer\n        self.layer_idx = layer_idx\n        if layer == \"hidden\":\n            assert layer_idx is not None\n            assert 0 <= abs(layer_idx) <= 12\n\n    def freeze(self):\n        self.transformer = self.transformer.eval()\n        # self.train = disabled_train\n        for param in self.parameters():\n            param.requires_grad = False\n\n    def forward(self, text):\n        batch_encoding = self.tokenizer(text, truncation=True, max_length=self.max_length, return_length=True,\n                                        return_overflowing_tokens=False, padding=\"max_length\", return_tensors=\"pt\")\n        tokens = batch_encoding[\"input_ids\"].to(self.device)\n        outputs = self.transformer(input_ids=tokens, output_hidden_states=self.layer == \"hidden\")\n        if self.layer == \"last\":\n            z = outputs.last_hidden_state\n        elif self.layer == \"pooled\":\n            z = outputs.pooler_output[:, None, :]\n        else:\n            z = outputs.hidden_states[self.layer_idx]\n        return z\n\n    def encode(self, text):\n        return self(text)\n\n\nclass ClipImageEmbedder(nn.Module):\n    def __init__(\n            self,\n            model,\n            jit=False,\n            device='cuda' if torch.cuda.is_available() else 'cpu',\n            antialias=True,\n            ucg_rate=0.\n    ):\n        super().__init__()\n        from clip import load as load_clip\n        self.model, _ = load_clip(name=model, device=device, jit=jit)\n\n        self.antialias = antialias\n\n        self.register_buffer('mean', torch.Tensor([0.48145466, 0.4578275, 0.40821073]), persistent=False)\n        self.register_buffer('std', torch.Tensor([0.26862954, 0.26130258, 0.27577711]), persistent=False)\n        self.ucg_rate = ucg_rate\n\n    def preprocess(self, x):\n        # normalize to [0,1]\n        x = kornia.geometry.resize(x, (224, 224),\n                                   interpolation='bicubic', align_corners=True,\n                                   antialias=self.antialias)\n        x = (x + 1.) / 2.\n        # re-normalize according to clip\n        x = kornia.enhance.normalize(x, self.mean, self.std)\n        return x\n\n    def forward(self, x, no_dropout=False):\n        # x is assumed to be in range [-1,1]\n        out = self.model.encode_image(self.preprocess(x))\n        out = out.to(x.dtype)\n        if self.ucg_rate > 0. and not no_dropout:\n            out = torch.bernoulli((1. 
- self.ucg_rate) * torch.ones(out.shape[0], device=out.device))[:, None] * out\n        return out\n\n\nclass FrozenOpenCLIPEmbedder(AbstractEncoder):\n    \"\"\"\n    Uses the OpenCLIP transformer encoder for text\n    \"\"\"\n    LAYERS = [\n        # \"pooled\",\n        \"last\",\n        \"penultimate\"\n    ]\n\n    def __init__(self, arch=\"ViT-H-14\", version=\"laion2b_s32b_b79k\", device=\"cuda\", max_length=77,\n                 freeze=True, layer=\"last\"):\n        super().__init__()\n        assert layer in self.LAYERS\n        version = os.environ.get(\"USER_DEF_CLIP\", version)\n        model, _, _ = open_clip.create_model_and_transforms(arch, device=torch.device('cpu'), pretrained=version)\n        del model.visual\n        self.model = model\n\n        self.device = get_device(device)\n        self.max_length = max_length\n        if freeze:\n            self.freeze()\n        self.layer = layer\n        if self.layer == \"last\":\n            self.layer_idx = 0\n        elif self.layer == \"penultimate\":\n            self.layer_idx = 1\n        else:\n            raise NotImplementedError()\n\n    def freeze(self):\n        self.model = self.model.eval()\n        for param in self.parameters():\n            param.requires_grad = False\n\n    def forward(self, text):\n        tokens = open_clip.tokenize(text)  # all clip models use 77 as context length\n        z = self.encode_with_transformer(tokens.to(self.device))\n        return z\n\n    def encode_with_transformer(self, text):\n        x = self.model.token_embedding(text)  # [batch_size, n_ctx, d_model]\n        x = x + self.model.positional_embedding\n        x = x.permute(1, 0, 2)  # NLD -> LND\n        x = self.text_transformer_forward(x, attn_mask=self.model.attn_mask)\n        x = x.permute(1, 0, 2)  # LND -> NLD\n        x = self.model.ln_final(x)\n        return x\n\n    def text_transformer_forward(self, x: torch.Tensor, attn_mask=None):\n        for i, r in enumerate(self.model.transformer.resblocks):\n            if i == len(self.model.transformer.resblocks) - self.layer_idx:\n                break\n            if self.model.transformer.grad_checkpointing and not torch.jit.is_scripting():\n                x = checkpoint(r, x, attn_mask)\n            else:\n                x = r(x, attn_mask=attn_mask)\n        return x\n\n    def encode(self, text):\n        return self(text)\n\n\nclass FrozenOpenCLIPImageEmbedder(AbstractEncoder):\n    \"\"\"\n    Uses the OpenCLIP vision transformer encoder for images\n    \"\"\"\n\n    def __init__(self, arch=\"ViT-H-14\", version=\"laion2b_s32b_b79k\", device=\"cuda\", max_length=77,\n                 freeze=True, layer=\"pooled\", antialias=True, ucg_rate=0.):\n        super().__init__()\n        version = os.environ.get(\"USER_DEF_CLIP\", version)\n        model, _, _ = open_clip.create_model_and_transforms(arch, device=torch.device('cpu'),\n                                                            pretrained=version, )\n        del model.transformer\n        self.model = model\n        # self.mapper = torch.nn.Linear(1280, 1024)\n        self.device = get_device(device)\n        self.max_length = max_length\n        if freeze:\n            self.freeze()\n        self.layer = layer\n        if self.layer == \"penultimate\":\n            raise NotImplementedError()\n            self.layer_idx = 1\n\n        self.antialias = antialias\n\n        self.register_buffer('mean', torch.Tensor([0.48145466, 0.4578275, 0.40821073]), persistent=False)\n        
self.register_buffer('std', torch.Tensor([0.26862954, 0.26130258, 0.27577711]), persistent=False)\n        self.ucg_rate = ucg_rate\n\n    def preprocess(self, x):\n        # normalize to [0,1]\n        x = kornia.geometry.resize(x, (224, 224),\n                                   interpolation='bicubic', align_corners=True,\n                                   antialias=self.antialias)\n        x = (x + 1.) / 2.\n        # renormalize according to clip\n        x = kornia.enhance.normalize(x, self.mean, self.std)\n        return x\n\n    def freeze(self):\n        self.model = self.model.eval()\n        for param in self.model.parameters():\n            param.requires_grad = False\n\n    @autocast\n    def forward(self, image, no_dropout=False):\n        z = self.encode_with_vision_transformer(image)\n        if self.ucg_rate > 0. and not no_dropout:\n            z = torch.bernoulli((1. - self.ucg_rate) * torch.ones(z.shape[0], device=z.device))[:, None] * z\n        return z\n\n    def encode_with_vision_transformer(self, img):\n        img = self.preprocess(img)\n        x = self.model.visual(img)\n        return x\n\n    def encode(self, text):\n        return self(text)\n\n\nclass FrozenOpenCLIPImageEmbedderV2(AbstractEncoder):\n    \"\"\"\n    Uses the OpenCLIP vision transformer encoder for images\n    \"\"\"\n\n    def __init__(self, arch=\"ViT-H-14\", version=\"laion2b_s32b_b79k\", device=\"cuda\",\n                 freeze=True, layer=\"pooled\", antialias=True):\n        super().__init__()\n        version = os.environ.get(\"USER_DEF_CLIP\", version)\n        model, _, _ = open_clip.create_model_and_transforms(arch, device=torch.device('cpu'),\n                                                            pretrained=version, )\n        del model.transformer\n        self.model = model\n        self.device = get_device(device)\n\n        if freeze:\n            self.freeze()\n        self.layer = layer\n        if self.layer == \"penultimate\":\n            raise NotImplementedError()\n            self.layer_idx = 1\n\n        self.antialias = antialias\n\n        self.register_buffer('mean', torch.Tensor([0.48145466, 0.4578275, 0.40821073]), persistent=False)\n        self.register_buffer('std', torch.Tensor([0.26862954, 0.26130258, 0.27577711]), persistent=False)\n\n    def preprocess(self, x):\n        # normalize to [0,1]\n        x = kornia.geometry.resize(x, (224, 224),\n                                   interpolation='bicubic', align_corners=True,\n                                   antialias=self.antialias)\n        x = (x + 1.) 
/ 2.\n        # renormalize according to clip\n        x = kornia.enhance.normalize(x, self.mean, self.std)\n        return x\n\n    def freeze(self):\n        self.model = self.model.eval()\n        for param in self.model.parameters():\n            param.requires_grad = False\n\n    def forward(self, image, no_dropout=False):\n        # image: b c h w\n        z = self.encode_with_vision_transformer(image)\n        return z\n\n    def encode_with_vision_transformer(self, x):\n        x = self.preprocess(x)\n\n        # to patches - whether to use dual patchnorm - https://arxiv.org/abs/2302.01327v1\n        # if self.model.visual.input_patchnorm:\n        #     # einops - rearrange(x, 'b c (h p1) (w p2) -> b (h w) (c p1 p2)')\n        #     x = x.reshape(x.shape[0], x.shape[1], self.model.visual.grid_size[0], self.model.visual.patch_size[0], self.model.visual.grid_size[1], self.model.visual.patch_size[1])\n        #     x = x.permute(0, 2, 4, 1, 3, 5)\n        #     x = x.reshape(x.shape[0], self.model.visual.grid_size[0] * self.model.visual.grid_size[1], -1)\n        #     x = self.model.visual.patchnorm_pre_ln(x)\n        #     x = self.model.visual.conv1(x)\n        # else:\n        x = self.model.visual.conv1(x)  # shape = [*, width, grid, grid]\n        x = x.reshape(x.shape[0], x.shape[1], -1)  # shape = [*, width, grid ** 2]\n        x = x.permute(0, 2, 1)  # shape = [*, grid ** 2, width]\n\n        # class embeddings and positional embeddings\n        x = torch.cat(\n            [self.model.visual.class_embedding.to(x.dtype) + torch.zeros(x.shape[0], 1, x.shape[-1], dtype=x.dtype, device=x.device),\n             x], dim=1)  # shape = [*, grid ** 2 + 1, width]\n        x = x + self.model.visual.positional_embedding.to(x.dtype)\n\n        # a patch_dropout of 0. would mean it is disabled and this function would do nothing but return what was passed in\n        x = self.model.visual.patch_dropout(x)\n        x = self.model.visual.ln_pre(x)\n\n        x = x.permute(1, 0, 2)  # NLD -> LND\n        x = self.model.visual.transformer(x)\n        x = x.permute(1, 0, 2)  # LND -> NLD\n\n        return x\n\n\nclass FrozenCLIPT5Encoder(AbstractEncoder):\n    def __init__(self, clip_version=\"openai/clip-vit-large-patch14\", t5_version=\"google/t5-v1_1-xl\", device=\"cuda\",\n                 clip_max_length=77, t5_max_length=77):\n        super().__init__()\n        self.clip_encoder = FrozenCLIPEmbedder(clip_version, device, max_length=clip_max_length)\n        self.t5_encoder = FrozenT5Embedder(t5_version, device, max_length=t5_max_length)\n        print(f\"{self.clip_encoder.__class__.__name__} has {count_params(self.clip_encoder) * 1.e-6:.2f} M parameters, \"\n              f\"{self.t5_encoder.__class__.__name__} comes with {count_params(self.t5_encoder) * 1.e-6:.2f} M params.\")\n\n    def encode(self, text):\n        return self(text)\n\n    def forward(self, text):\n        clip_z = self.clip_encoder.encode(text)\n        t5_z = self.t5_encoder.encode(text)\n        return [clip_z, t5_z]\n"
  },
  {
    "path": "ToonCrafter/lvdm/modules/encoders/resampler.py",
    "content": "# modified from https://github.com/mlfoundations/open_flamingo/blob/main/open_flamingo/src/helpers.py\n# and https://github.com/lucidrains/imagen-pytorch/blob/main/imagen_pytorch/imagen_pytorch.py\n# and https://github.com/tencent-ailab/IP-Adapter/blob/main/ip_adapter/resampler.py\nimport math\nimport torch\nimport torch.nn as nn\n\n\nclass ImageProjModel(nn.Module):\n    \"\"\"Projection Model\"\"\"\n    def __init__(self, cross_attention_dim=1024, clip_embeddings_dim=1024, clip_extra_context_tokens=4):\n        super().__init__()        \n        self.cross_attention_dim = cross_attention_dim\n        self.clip_extra_context_tokens = clip_extra_context_tokens\n        self.proj = nn.Linear(clip_embeddings_dim, self.clip_extra_context_tokens * cross_attention_dim)\n        self.norm = nn.LayerNorm(cross_attention_dim)\n        \n    def forward(self, image_embeds):\n        #embeds = image_embeds\n        embeds = image_embeds.type(list(self.proj.parameters())[0].dtype)\n        clip_extra_context_tokens = self.proj(embeds).reshape(-1, self.clip_extra_context_tokens, self.cross_attention_dim)\n        clip_extra_context_tokens = self.norm(clip_extra_context_tokens)\n        return clip_extra_context_tokens\n\n\n# FFN\ndef FeedForward(dim, mult=4):\n    inner_dim = int(dim * mult)\n    return nn.Sequential(\n        nn.LayerNorm(dim),\n        nn.Linear(dim, inner_dim, bias=False),\n        nn.GELU(),\n        nn.Linear(inner_dim, dim, bias=False),\n    )\n    \n    \ndef reshape_tensor(x, heads):\n    bs, length, width = x.shape\n    #(bs, length, width) --> (bs, length, n_heads, dim_per_head)\n    x = x.view(bs, length, heads, -1)\n    # (bs, length, n_heads, dim_per_head) --> (bs, n_heads, length, dim_per_head)\n    x = x.transpose(1, 2)\n    # (bs, n_heads, length, dim_per_head) --> (bs*n_heads, length, dim_per_head)\n    x = x.reshape(bs, heads, length, -1)\n    return x\n\n\nclass PerceiverAttention(nn.Module):\n    def __init__(self, *, dim, dim_head=64, heads=8):\n        super().__init__()\n        self.scale = dim_head**-0.5\n        self.dim_head = dim_head\n        self.heads = heads\n        inner_dim = dim_head * heads\n\n        self.norm1 = nn.LayerNorm(dim)\n        self.norm2 = nn.LayerNorm(dim)\n\n        self.to_q = nn.Linear(dim, inner_dim, bias=False)\n        self.to_kv = nn.Linear(dim, inner_dim * 2, bias=False)\n        self.to_out = nn.Linear(inner_dim, dim, bias=False)\n\n\n    def forward(self, x, latents):\n        \"\"\"\n        Args:\n            x (torch.Tensor): image features\n                shape (b, n1, D)\n            latent (torch.Tensor): latent features\n                shape (b, n2, D)\n        \"\"\"\n        x = self.norm1(x)\n        latents = self.norm2(latents)\n        \n        b, l, _ = latents.shape\n\n        q = self.to_q(latents)\n        kv_input = torch.cat((x, latents), dim=-2)\n        k, v = self.to_kv(kv_input).chunk(2, dim=-1)\n        \n        q = reshape_tensor(q, self.heads)\n        k = reshape_tensor(k, self.heads)\n        v = reshape_tensor(v, self.heads)\n\n        # attention\n        scale = 1 / math.sqrt(math.sqrt(self.dim_head))\n        weight = (q * scale) @ (k * scale).transpose(-2, -1) # More stable with f16 than dividing afterwards\n        weight = torch.softmax(weight.float(), dim=-1).type(weight.dtype)\n        out = weight @ v\n        \n        out = out.permute(0, 2, 1, 3).reshape(b, l, -1)\n\n        return self.to_out(out)\n\n\nclass Resampler(nn.Module):\n    def __init__(\n        
self,\n        dim=1024,\n        depth=8,\n        dim_head=64,\n        heads=16,\n        num_queries=8,\n        embedding_dim=768,\n        output_dim=1024,\n        ff_mult=4,\n        video_length=None, # using frame-wise version or not\n    ):\n        super().__init__()\n        ## queries for a single frame / image\n        self.num_queries = num_queries \n        self.video_length = video_length\n\n        ## <num_queries> queries for each frame\n        if video_length is not None: \n            num_queries = num_queries * video_length\n\n        self.latents = nn.Parameter(torch.randn(1, num_queries, dim) / dim**0.5)\n        self.proj_in = nn.Linear(embedding_dim, dim)\n        self.proj_out = nn.Linear(dim, output_dim)\n        self.norm_out = nn.LayerNorm(output_dim)\n        \n        self.layers = nn.ModuleList([])\n        for _ in range(depth):\n            self.layers.append(\n                nn.ModuleList(\n                    [\n                        PerceiverAttention(dim=dim, dim_head=dim_head, heads=heads),\n                        FeedForward(dim=dim, mult=ff_mult),\n                    ]\n                )\n            )\n\n    def forward(self, x):\n        latents = self.latents.repeat(x.size(0), 1, 1) ## B (T L) C\n        x = self.proj_in(x)\n        \n        for attn, ff in self.layers:\n            latents = attn(x, latents) + latents\n            latents = ff(latents) + latents\n            \n        latents = self.proj_out(latents)\n        latents = self.norm_out(latents) # B L C or B (T L) C\n\n        return latents"
  },
  {
    "path": "ToonCrafter/lvdm/modules/networks/ae_modules.py",
    "content": "# pytorch_diffusion + derived encoder decoder\nimport math\n\nimport torch\nimport numpy as np\nimport torch.nn as nn\nfrom einops import rearrange\n\nfrom ToonCrafter.utils.utils import instantiate_from_config\nfrom lvdm.modules.attention import LinearAttention\n\n\ndef nonlinearity(x):\n    # swish\n    return x * torch.sigmoid(x)\n\n\ndef Normalize(in_channels, num_groups=32):\n    return torch.nn.GroupNorm(num_groups=num_groups, num_channels=in_channels, eps=1e-6, affine=True)\n\n\nclass LinAttnBlock(LinearAttention):\n    \"\"\"to match AttnBlock usage\"\"\"\n\n    def __init__(self, in_channels):\n        super().__init__(dim=in_channels, heads=1, dim_head=in_channels)\n\n\nclass AttnBlock(nn.Module):\n    def __init__(self, in_channels):\n        super().__init__()\n        self.in_channels = in_channels\n\n        self.norm = Normalize(in_channels)\n        self.q = torch.nn.Conv2d(in_channels,\n                                 in_channels,\n                                 kernel_size=1,\n                                 stride=1,\n                                 padding=0)\n        self.k = torch.nn.Conv2d(in_channels,\n                                 in_channels,\n                                 kernel_size=1,\n                                 stride=1,\n                                 padding=0)\n        self.v = torch.nn.Conv2d(in_channels,\n                                 in_channels,\n                                 kernel_size=1,\n                                 stride=1,\n                                 padding=0)\n        self.proj_out = torch.nn.Conv2d(in_channels,\n                                        in_channels,\n                                        kernel_size=1,\n                                        stride=1,\n                                        padding=0)\n\n    def forward(self, x):\n        h_ = x\n        h_ = self.norm(h_)\n        q = self.q(h_)\n        k = self.k(h_)\n        v = self.v(h_)\n\n        # compute attention\n        b, c, h, w = q.shape\n        q = q.reshape(b, c, h * w)  # bcl\n        q = q.permute(0, 2, 1)   # bcl -> blc l=hw\n        k = k.reshape(b, c, h * w)  # bcl\n\n        w_ = torch.bmm(q, k)    # b,hw,hw    w[b,i,j]=sum_c q[b,i,c]k[b,c,j]\n        w_ = w_ * (int(c)**(-0.5))\n        w_ = torch.nn.functional.softmax(w_, dim=2)\n\n        # attend to values\n        v = v.reshape(b, c, h * w)\n        w_ = w_.permute(0, 2, 1)   # b,hw,hw (first hw of k, second of q)\n        h_ = torch.bmm(v, w_)     # b, c,hw (hw of q) h_[b,c,j] = sum_i v[b,c,i] w_[b,i,j]\n        h_ = h_.reshape(b, c, h, w)\n\n        h_ = self.proj_out(h_)\n\n        return x + h_\n\n\ndef make_attn(in_channels, attn_type=\"vanilla\"):\n    assert attn_type in [\"vanilla\", \"linear\", \"none\"], f'attn_type {attn_type} unknown'\n    # print(f\"making attention of type '{attn_type}' with {in_channels} in_channels\")\n    if attn_type == \"vanilla\":\n        return AttnBlock(in_channels)\n    elif attn_type == \"none\":\n        return nn.Identity(in_channels)\n    else:\n        return LinAttnBlock(in_channels)\n\n\nclass Downsample(nn.Module):\n    def __init__(self, in_channels, with_conv):\n        super().__init__()\n        self.with_conv = with_conv\n        self.in_channels = in_channels\n        if self.with_conv:\n            # no asymmetric padding in torch conv, must do it ourselves\n            self.conv = torch.nn.Conv2d(in_channels,\n                                        in_channels,\n                            
            kernel_size=3,\n                                        stride=2,\n                                        padding=0)\n\n    def forward(self, x):\n        if self.with_conv:\n            pad = (0, 1, 0, 1)\n            x = torch.nn.functional.pad(x, pad, mode=\"constant\", value=0)\n            x = self.conv(x)\n        else:\n            x = torch.nn.functional.avg_pool2d(x, kernel_size=2, stride=2)\n        return x\n\n\nclass Upsample(nn.Module):\n    def __init__(self, in_channels, with_conv):\n        super().__init__()\n        self.with_conv = with_conv\n        self.in_channels = in_channels\n        if self.with_conv:\n            self.conv = torch.nn.Conv2d(in_channels,\n                                        in_channels,\n                                        kernel_size=3,\n                                        stride=1,\n                                        padding=1)\n\n    def forward(self, x):\n        x = torch.nn.functional.interpolate(x, scale_factor=2.0, mode=\"nearest\")\n        if self.with_conv:\n            x = self.conv(x)\n        return x\n\n\ndef get_timestep_embedding(timesteps, embedding_dim):\n    \"\"\"\n    This matches the implementation in Denoising Diffusion Probabilistic Models:\n    From Fairseq.\n    Build sinusoidal embeddings.\n    This matches the implementation in tensor2tensor, but differs slightly\n    from the description in Section 3.5 of \"Attention Is All You Need\".\n    \"\"\"\n    assert len(timesteps.shape) == 1\n\n    half_dim = embedding_dim // 2\n    emb = math.log(10000) / (half_dim - 1)\n    emb = torch.exp(torch.arange(half_dim, dtype=torch.float32) * -emb)\n    emb = emb.to(device=timesteps.device)\n    emb = timesteps.float()[:, None] * emb[None, :]\n    emb = torch.cat([torch.sin(emb), torch.cos(emb)], dim=1)\n    if embedding_dim % 2 == 1:  # zero pad\n        emb = torch.nn.functional.pad(emb, (0, 1, 0, 0))\n    return emb\n\n\nclass ResnetBlock(nn.Module):\n    def __init__(self, *, in_channels, out_channels=None, conv_shortcut=False,\n                 dropout, temb_channels=512):\n        super().__init__()\n        self.in_channels = in_channels\n        out_channels = in_channels if out_channels is None else out_channels\n        self.out_channels = out_channels\n        self.use_conv_shortcut = conv_shortcut\n\n        self.norm1 = Normalize(in_channels)\n        self.conv1 = torch.nn.Conv2d(in_channels,\n                                     out_channels,\n                                     kernel_size=3,\n                                     stride=1,\n                                     padding=1)\n        if temb_channels > 0:\n            self.temb_proj = torch.nn.Linear(temb_channels,\n                                             out_channels)\n        self.norm2 = Normalize(out_channels)\n        self.dropout = torch.nn.Dropout(dropout)\n        self.conv2 = torch.nn.Conv2d(out_channels,\n                                     out_channels,\n                                     kernel_size=3,\n                                     stride=1,\n                                     padding=1)\n        if self.in_channels != self.out_channels:\n            if self.use_conv_shortcut:\n                self.conv_shortcut = torch.nn.Conv2d(in_channels,\n                                                     out_channels,\n                                                     kernel_size=3,\n                                                     stride=1,\n                                                     
padding=1)\n            else:\n                self.nin_shortcut = torch.nn.Conv2d(in_channels,\n                                                    out_channels,\n                                                    kernel_size=1,\n                                                    stride=1,\n                                                    padding=0)\n\n    def forward(self, x, temb):\n        h = x\n        h = self.norm1(h)\n        h = nonlinearity(h)\n        h = self.conv1(h)\n\n        if temb is not None:\n            h = h + self.temb_proj(nonlinearity(temb))[:, :, None, None]\n\n        h = self.norm2(h)\n        h = nonlinearity(h)\n        h = self.dropout(h)\n        h = self.conv2(h)\n\n        if self.in_channels != self.out_channels:\n            if self.use_conv_shortcut:\n                x = self.conv_shortcut(x)\n            else:\n                x = self.nin_shortcut(x)\n\n        return x + h\n\n\nclass Model(nn.Module):\n    def __init__(self, *, ch, out_ch, ch_mult=(1, 2, 4, 8), num_res_blocks,\n                 attn_resolutions, dropout=0.0, resamp_with_conv=True, in_channels,\n                 resolution, use_timestep=True, use_linear_attn=False, attn_type=\"vanilla\"):\n        super().__init__()\n        if use_linear_attn:\n            attn_type = \"linear\"\n        self.ch = ch\n        self.temb_ch = self.ch * 4\n        self.num_resolutions = len(ch_mult)\n        self.num_res_blocks = num_res_blocks\n        self.resolution = resolution\n        self.in_channels = in_channels\n\n        self.use_timestep = use_timestep\n        if self.use_timestep:\n            # timestep embedding\n            self.temb = nn.Module()\n            self.temb.dense = nn.ModuleList([\n                torch.nn.Linear(self.ch,\n                                self.temb_ch),\n                torch.nn.Linear(self.temb_ch,\n                                self.temb_ch),\n            ])\n\n        # downsampling\n        self.conv_in = torch.nn.Conv2d(in_channels,\n                                       self.ch,\n                                       kernel_size=3,\n                                       stride=1,\n                                       padding=1)\n\n        curr_res = resolution\n        in_ch_mult = (1,) + tuple(ch_mult)\n        self.down = nn.ModuleList()\n        for i_level in range(self.num_resolutions):\n            block = nn.ModuleList()\n            attn = nn.ModuleList()\n            block_in = ch * in_ch_mult[i_level]\n            block_out = ch * ch_mult[i_level]\n            for i_block in range(self.num_res_blocks):\n                block.append(ResnetBlock(in_channels=block_in,\n                                         out_channels=block_out,\n                                         temb_channels=self.temb_ch,\n                                         dropout=dropout))\n                block_in = block_out\n                if curr_res in attn_resolutions:\n                    attn.append(make_attn(block_in, attn_type=attn_type))\n            down = nn.Module()\n            down.block = block\n            down.attn = attn\n            if i_level != self.num_resolutions - 1:\n                down.downsample = Downsample(block_in, resamp_with_conv)\n                curr_res = curr_res // 2\n            self.down.append(down)\n\n        # middle\n        self.mid = nn.Module()\n        self.mid.block_1 = ResnetBlock(in_channels=block_in,\n                                       out_channels=block_in,\n                                       
temb_channels=self.temb_ch,\n                                       dropout=dropout)\n        self.mid.attn_1 = make_attn(block_in, attn_type=attn_type)\n        self.mid.block_2 = ResnetBlock(in_channels=block_in,\n                                       out_channels=block_in,\n                                       temb_channels=self.temb_ch,\n                                       dropout=dropout)\n\n        # upsampling\n        self.up = nn.ModuleList()\n        for i_level in reversed(range(self.num_resolutions)):\n            block = nn.ModuleList()\n            attn = nn.ModuleList()\n            block_out = ch * ch_mult[i_level]\n            skip_in = ch * ch_mult[i_level]\n            for i_block in range(self.num_res_blocks + 1):\n                if i_block == self.num_res_blocks:\n                    skip_in = ch * in_ch_mult[i_level]\n                block.append(ResnetBlock(in_channels=block_in + skip_in,\n                                         out_channels=block_out,\n                                         temb_channels=self.temb_ch,\n                                         dropout=dropout))\n                block_in = block_out\n                if curr_res in attn_resolutions:\n                    attn.append(make_attn(block_in, attn_type=attn_type))\n            up = nn.Module()\n            up.block = block\n            up.attn = attn\n            if i_level != 0:\n                up.upsample = Upsample(block_in, resamp_with_conv)\n                curr_res = curr_res * 2\n            self.up.insert(0, up)  # prepend to get consistent order\n\n        # end\n        self.norm_out = Normalize(block_in)\n        self.conv_out = torch.nn.Conv2d(block_in,\n                                        out_ch,\n                                        kernel_size=3,\n                                        stride=1,\n                                        padding=1)\n\n    def forward(self, x, t=None, context=None):\n        # assert x.shape[2] == x.shape[3] == self.resolution\n        if context is not None:\n            # assume aligned context, cat along channel axis\n            x = torch.cat((x, context), dim=1)\n        if self.use_timestep:\n            # timestep embedding\n            assert t is not None\n            temb = get_timestep_embedding(t, self.ch)\n            temb = self.temb.dense[0](temb)\n            temb = nonlinearity(temb)\n            temb = self.temb.dense[1](temb)\n        else:\n            temb = None\n\n        # downsampling\n        hs = [self.conv_in(x)]\n        for i_level in range(self.num_resolutions):\n            for i_block in range(self.num_res_blocks):\n                h = self.down[i_level].block[i_block](hs[-1], temb)\n                if len(self.down[i_level].attn) > 0:\n                    h = self.down[i_level].attn[i_block](h)\n                hs.append(h)\n            if i_level != self.num_resolutions - 1:\n                hs.append(self.down[i_level].downsample(hs[-1]))\n\n        # middle\n        h = hs[-1]\n        h = self.mid.block_1(h, temb)\n        h = self.mid.attn_1(h)\n        h = self.mid.block_2(h, temb)\n\n        # upsampling\n        for i_level in reversed(range(self.num_resolutions)):\n            for i_block in range(self.num_res_blocks + 1):\n                h = self.up[i_level].block[i_block](\n                    torch.cat([h, hs.pop()], dim=1), temb)\n                if len(self.up[i_level].attn) > 0:\n                    h = self.up[i_level].attn[i_block](h)\n            if i_level != 0:\n              
  h = self.up[i_level].upsample(h)\n\n        # end\n        h = self.norm_out(h)\n        h = nonlinearity(h)\n        h = self.conv_out(h)\n        return h\n\n    def get_last_layer(self):\n        return self.conv_out.weight\n\n\nclass Encoder(nn.Module):\n    def __init__(self, *, ch, out_ch, ch_mult=(1, 2, 4, 8), num_res_blocks,\n                 attn_resolutions, dropout=0.0, resamp_with_conv=True, in_channels,\n                 resolution, z_channels, double_z=True, use_linear_attn=False, attn_type=\"vanilla\",\n                 **ignore_kwargs):\n        super().__init__()\n        if use_linear_attn:\n            attn_type = \"linear\"\n        self.ch = ch\n        self.temb_ch = 0\n        self.num_resolutions = len(ch_mult)\n        self.num_res_blocks = num_res_blocks\n        self.resolution = resolution\n        self.in_channels = in_channels\n\n        # downsampling\n        self.conv_in = torch.nn.Conv2d(in_channels,\n                                       self.ch,\n                                       kernel_size=3,\n                                       stride=1,\n                                       padding=1)\n\n        curr_res = resolution\n        in_ch_mult = (1,) + tuple(ch_mult)\n        self.in_ch_mult = in_ch_mult\n        self.down = nn.ModuleList()\n        for i_level in range(self.num_resolutions):\n            block = nn.ModuleList()\n            attn = nn.ModuleList()\n            block_in = ch * in_ch_mult[i_level]\n            block_out = ch * ch_mult[i_level]\n            for i_block in range(self.num_res_blocks):\n                block.append(ResnetBlock(in_channels=block_in,\n                                         out_channels=block_out,\n                                         temb_channels=self.temb_ch,\n                                         dropout=dropout))\n                block_in = block_out\n                if curr_res in attn_resolutions:\n                    attn.append(make_attn(block_in, attn_type=attn_type))\n            down = nn.Module()\n            down.block = block\n            down.attn = attn\n            if i_level != self.num_resolutions - 1:\n                down.downsample = Downsample(block_in, resamp_with_conv)\n                curr_res = curr_res // 2\n            self.down.append(down)\n\n        # middle\n        self.mid = nn.Module()\n        self.mid.block_1 = ResnetBlock(in_channels=block_in,\n                                       out_channels=block_in,\n                                       temb_channels=self.temb_ch,\n                                       dropout=dropout)\n        self.mid.attn_1 = make_attn(block_in, attn_type=attn_type)\n        self.mid.block_2 = ResnetBlock(in_channels=block_in,\n                                       out_channels=block_in,\n                                       temb_channels=self.temb_ch,\n                                       dropout=dropout)\n\n        # end\n        self.norm_out = Normalize(block_in)\n        self.conv_out = torch.nn.Conv2d(block_in,\n                                        2 * z_channels if double_z else z_channels,\n                                        kernel_size=3,\n                                        stride=1,\n                                        padding=1)\n\n    def forward(self, x, return_hidden_states=False):\n        # timestep embedding\n        temb = None\n\n        # print(f'encoder-input={x.shape}')\n        # downsampling\n        hs = [self.conv_in(x)]\n\n        # if we return hidden states for decoder usage, 
we will store them in a list\n        if return_hidden_states:\n            hidden_states = []\n        # print(f'encoder-conv in feat={hs[0].shape}')\n        for i_level in range(self.num_resolutions):\n            for i_block in range(self.num_res_blocks):\n                h = self.down[i_level].block[i_block](hs[-1], temb)\n                # print(f'encoder-down feat={h.shape}')\n                if len(self.down[i_level].attn) > 0:\n                    h = self.down[i_level].attn[i_block](h)\n                hs.append(h)\n            if return_hidden_states:\n                hidden_states.append(h)\n            if i_level != self.num_resolutions - 1:\n                # print(f'encoder-downsample (input)={hs[-1].shape}')\n                hs.append(self.down[i_level].downsample(hs[-1]))\n                # print(f'encoder-downsample (output)={hs[-1].shape}')\n        if return_hidden_states:\n            hidden_states.append(hs[0])\n        # middle\n        h = hs[-1]\n        h = self.mid.block_1(h, temb)\n        # print(f'encoder-mid1 feat={h.shape}')\n        h = self.mid.attn_1(h)\n        h = self.mid.block_2(h, temb)\n        # print(f'encoder-mid2 feat={h.shape}')\n\n        # end\n        h = self.norm_out(h)\n        h = nonlinearity(h)\n        h = self.conv_out(h)\n        # print(f'end feat={h.shape}')\n        if return_hidden_states:\n            return h, hidden_states\n        else:\n            return h\n\n\nclass Decoder(nn.Module):\n    def __init__(self, *, ch, out_ch, ch_mult=(1, 2, 4, 8), num_res_blocks,\n                 attn_resolutions, dropout=0.0, resamp_with_conv=True, in_channels,\n                 resolution, z_channels, give_pre_end=False, tanh_out=False, use_linear_attn=False,\n                 attn_type=\"vanilla\", **ignorekwargs):\n        super().__init__()\n        if use_linear_attn:\n            attn_type = \"linear\"\n        self.ch = ch\n        self.temb_ch = 0\n        self.num_resolutions = len(ch_mult)\n        self.num_res_blocks = num_res_blocks\n        self.resolution = resolution\n        self.in_channels = in_channels\n        self.give_pre_end = give_pre_end\n        self.tanh_out = tanh_out\n\n        # compute in_ch_mult, block_in and curr_res at lowest res\n        in_ch_mult = (1,) + tuple(ch_mult)\n        block_in = ch * ch_mult[self.num_resolutions - 1]\n        curr_res = resolution // 2**(self.num_resolutions - 1)\n        self.z_shape = (1, z_channels, curr_res, curr_res)\n        print(\"AE working on z of shape {} = {} dimensions.\".format(\n            self.z_shape, np.prod(self.z_shape)))\n\n        # z to block_in\n        self.conv_in = torch.nn.Conv2d(z_channels,\n                                       block_in,\n                                       kernel_size=3,\n                                       stride=1,\n                                       padding=1)\n\n        # middle\n        self.mid = nn.Module()\n        self.mid.block_1 = ResnetBlock(in_channels=block_in,\n                                       out_channels=block_in,\n                                       temb_channels=self.temb_ch,\n                                       dropout=dropout)\n        self.mid.attn_1 = make_attn(block_in, attn_type=attn_type)\n        self.mid.block_2 = ResnetBlock(in_channels=block_in,\n                                       out_channels=block_in,\n                                       temb_channels=self.temb_ch,\n                                       dropout=dropout)\n\n        # upsampling\n        self.up = 
nn.ModuleList()\n        for i_level in reversed(range(self.num_resolutions)):\n            block = nn.ModuleList()\n            attn = nn.ModuleList()\n            block_out = ch * ch_mult[i_level]\n            for i_block in range(self.num_res_blocks + 1):\n                block.append(ResnetBlock(in_channels=block_in,\n                                         out_channels=block_out,\n                                         temb_channels=self.temb_ch,\n                                         dropout=dropout))\n                block_in = block_out\n                if curr_res in attn_resolutions:\n                    attn.append(make_attn(block_in, attn_type=attn_type))\n            up = nn.Module()\n            up.block = block\n            up.attn = attn\n            if i_level != 0:\n                up.upsample = Upsample(block_in, resamp_with_conv)\n                curr_res = curr_res * 2\n            self.up.insert(0, up)  # prepend to get consistent order\n\n        # end\n        self.norm_out = Normalize(block_in)\n        self.conv_out = torch.nn.Conv2d(block_in,\n                                        out_ch,\n                                        kernel_size=3,\n                                        stride=1,\n                                        padding=1)\n\n    def forward(self, z):\n        # assert z.shape[1:] == self.z_shape[1:]\n        self.last_z_shape = z.shape\n\n        # print(f'decoder-input={z.shape}')\n        # timestep embedding\n        temb = None\n\n        # z to block_in\n        h = self.conv_in(z)\n        # print(f'decoder-conv in feat={h.shape}')\n\n        # middle\n        h = self.mid.block_1(h, temb)\n        h = self.mid.attn_1(h)\n        h = self.mid.block_2(h, temb)\n        # print(f'decoder-mid feat={h.shape}')\n\n        # upsampling\n        for i_level in reversed(range(self.num_resolutions)):\n            for i_block in range(self.num_res_blocks + 1):\n                h = self.up[i_level].block[i_block](h, temb)\n                if len(self.up[i_level].attn) > 0:\n                    h = self.up[i_level].attn[i_block](h)\n                # print(f'decoder-up feat={h.shape}')\n            if i_level != 0:\n                h = self.up[i_level].upsample(h)\n                # print(f'decoder-upsample feat={h.shape}')\n\n        # end\n        if self.give_pre_end:\n            return h\n\n        h = self.norm_out(h)\n        h = nonlinearity(h)\n        h = self.conv_out(h)\n        # print(f'decoder-conv_out feat={h.shape}')\n        if self.tanh_out:\n            h = torch.tanh(h)\n        return h\n\n\nclass SimpleDecoder(nn.Module):\n    def __init__(self, in_channels, out_channels, *args, **kwargs):\n        super().__init__()\n        self.model = nn.ModuleList([nn.Conv2d(in_channels, in_channels, 1),\n                                    ResnetBlock(in_channels=in_channels,\n                                                out_channels=2 * in_channels,\n                                                temb_channels=0, dropout=0.0),\n                                    ResnetBlock(in_channels=2 * in_channels,\n                                                out_channels=4 * in_channels,\n                                                temb_channels=0, dropout=0.0),\n                                    ResnetBlock(in_channels=4 * in_channels,\n                                                out_channels=2 * in_channels,\n                                                temb_channels=0, dropout=0.0),\n                              
      nn.Conv2d(2 * in_channels, in_channels, 1),\n                                    Upsample(in_channels, with_conv=True)])\n        # end\n        self.norm_out = Normalize(in_channels)\n        self.conv_out = torch.nn.Conv2d(in_channels,\n                                        out_channels,\n                                        kernel_size=3,\n                                        stride=1,\n                                        padding=1)\n\n    def forward(self, x):\n        for i, layer in enumerate(self.model):\n            if i in [1, 2, 3]:\n                x = layer(x, None)\n            else:\n                x = layer(x)\n\n        h = self.norm_out(x)\n        h = nonlinearity(h)\n        x = self.conv_out(h)\n        return x\n\n\nclass UpsampleDecoder(nn.Module):\n    def __init__(self, in_channels, out_channels, ch, num_res_blocks, resolution,\n                 ch_mult=(2, 2), dropout=0.0):\n        super().__init__()\n        # upsampling\n        self.temb_ch = 0\n        self.num_resolutions = len(ch_mult)\n        self.num_res_blocks = num_res_blocks\n        block_in = in_channels\n        curr_res = resolution // 2 ** (self.num_resolutions - 1)\n        self.res_blocks = nn.ModuleList()\n        self.upsample_blocks = nn.ModuleList()\n        for i_level in range(self.num_resolutions):\n            res_block = []\n            block_out = ch * ch_mult[i_level]\n            for i_block in range(self.num_res_blocks + 1):\n                res_block.append(ResnetBlock(in_channels=block_in,\n                                             out_channels=block_out,\n                                             temb_channels=self.temb_ch,\n                                             dropout=dropout))\n                block_in = block_out\n            self.res_blocks.append(nn.ModuleList(res_block))\n            if i_level != self.num_resolutions - 1:\n                self.upsample_blocks.append(Upsample(block_in, True))\n                curr_res = curr_res * 2\n\n        # end\n        self.norm_out = Normalize(block_in)\n        self.conv_out = torch.nn.Conv2d(block_in,\n                                        out_channels,\n                                        kernel_size=3,\n                                        stride=1,\n                                        padding=1)\n\n    def forward(self, x):\n        # upsampling\n        h = x\n        for k, i_level in enumerate(range(self.num_resolutions)):\n            for i_block in range(self.num_res_blocks + 1):\n                h = self.res_blocks[i_level][i_block](h, None)\n            if i_level != self.num_resolutions - 1:\n                h = self.upsample_blocks[k](h)\n        h = self.norm_out(h)\n        h = nonlinearity(h)\n        h = self.conv_out(h)\n        return h\n\n\nclass LatentRescaler(nn.Module):\n    def __init__(self, factor, in_channels, mid_channels, out_channels, depth=2):\n        super().__init__()\n        # residual block, interpolate, residual block\n        self.factor = factor\n        self.conv_in = nn.Conv2d(in_channels,\n                                 mid_channels,\n                                 kernel_size=3,\n                                 stride=1,\n                                 padding=1)\n        self.res_block1 = nn.ModuleList([ResnetBlock(in_channels=mid_channels,\n                                                     out_channels=mid_channels,\n                                                     temb_channels=0,\n                                               
      dropout=0.0) for _ in range(depth)])\n        self.attn = AttnBlock(mid_channels)\n        self.res_block2 = nn.ModuleList([ResnetBlock(in_channels=mid_channels,\n                                                     out_channels=mid_channels,\n                                                     temb_channels=0,\n                                                     dropout=0.0) for _ in range(depth)])\n\n        self.conv_out = nn.Conv2d(mid_channels,\n                                  out_channels,\n                                  kernel_size=1,\n                                  )\n\n    def forward(self, x):\n        x = self.conv_in(x)\n        for block in self.res_block1:\n            x = block(x, None)\n        x = torch.nn.functional.interpolate(x, size=(int(round(x.shape[2] * self.factor)), int(round(x.shape[3] * self.factor))))\n        x = self.attn(x)\n        for block in self.res_block2:\n            x = block(x, None)\n        x = self.conv_out(x)\n        return x\n\n\nclass MergedRescaleEncoder(nn.Module):\n    def __init__(self, in_channels, ch, resolution, out_ch, num_res_blocks,\n                 attn_resolutions, dropout=0.0, resamp_with_conv=True,\n                 ch_mult=(1, 2, 4, 8), rescale_factor=1.0, rescale_module_depth=1):\n        super().__init__()\n        intermediate_chn = ch * ch_mult[-1]\n        self.encoder = Encoder(in_channels=in_channels, num_res_blocks=num_res_blocks, ch=ch, ch_mult=ch_mult,\n                               z_channels=intermediate_chn, double_z=False, resolution=resolution,\n                               attn_resolutions=attn_resolutions, dropout=dropout, resamp_with_conv=resamp_with_conv,\n                               out_ch=None)\n        self.rescaler = LatentRescaler(factor=rescale_factor, in_channels=intermediate_chn,\n                                       mid_channels=intermediate_chn, out_channels=out_ch, depth=rescale_module_depth)\n\n    def forward(self, x):\n        x = self.encoder(x)\n        x = self.rescaler(x)\n        return x\n\n\nclass MergedRescaleDecoder(nn.Module):\n    def __init__(self, z_channels, out_ch, resolution, num_res_blocks, attn_resolutions, ch, ch_mult=(1, 2, 4, 8),\n                 dropout=0.0, resamp_with_conv=True, rescale_factor=1.0, rescale_module_depth=1):\n        super().__init__()\n        tmp_chn = z_channels * ch_mult[-1]\n        self.decoder = Decoder(out_ch=out_ch, z_channels=tmp_chn, attn_resolutions=attn_resolutions, dropout=dropout,\n                               resamp_with_conv=resamp_with_conv, in_channels=None, num_res_blocks=num_res_blocks,\n                               ch_mult=ch_mult, resolution=resolution, ch=ch)\n        self.rescaler = LatentRescaler(factor=rescale_factor, in_channels=z_channels, mid_channels=tmp_chn,\n                                       out_channels=tmp_chn, depth=rescale_module_depth)\n\n    def forward(self, x):\n        x = self.rescaler(x)\n        x = self.decoder(x)\n        return x\n\n\nclass Upsampler(nn.Module):\n    def __init__(self, in_size, out_size, in_channels, out_channels, ch_mult=2):\n        super().__init__()\n        assert out_size >= in_size\n        num_blocks = int(np.log2(out_size // in_size)) + 1\n        factor_up = 1. 
+ (out_size % in_size)\n        print(f\"Building {self.__class__.__name__} with in_size: {in_size} --> out_size {out_size} and factor {factor_up}\")\n        self.rescaler = LatentRescaler(factor=factor_up, in_channels=in_channels, mid_channels=2 * in_channels,\n                                       out_channels=in_channels)\n        self.decoder = Decoder(out_ch=out_channels, resolution=out_size, z_channels=in_channels, num_res_blocks=2,\n                               attn_resolutions=[], in_channels=None, ch=in_channels,\n                               ch_mult=[ch_mult for _ in range(num_blocks)])\n\n    def forward(self, x):\n        x = self.rescaler(x)\n        x = self.decoder(x)\n        return x\n\n\nclass Resize(nn.Module):\n    def __init__(self, in_channels=None, learned=False, mode=\"bilinear\"):\n        super().__init__()\n        self.with_conv = learned\n        self.mode = mode\n        if self.with_conv:\n            print(f\"Note: {self.__class__.__name__} uses learned downsampling and will ignore the fixed {mode} mode\")\n            raise NotImplementedError()\n            assert in_channels is not None\n            # no asymmetric padding in torch conv, must do it ourselves\n            self.conv = torch.nn.Conv2d(in_channels,\n                                        in_channels,\n                                        kernel_size=4,\n                                        stride=2,\n                                        padding=1)\n\n    def forward(self, x, scale_factor=1.0):\n        if scale_factor == 1.0:\n            return x\n        else:\n            x = torch.nn.functional.interpolate(x, mode=self.mode, align_corners=False, scale_factor=scale_factor)\n        return x\n\n\nclass FirstStagePostProcessor(nn.Module):\n\n    def __init__(self, ch_mult: list, in_channels,\n                 pretrained_model: nn.Module = None,\n                 reshape=False,\n                 n_channels=None,\n                 dropout=0.,\n                 pretrained_config=None):\n        super().__init__()\n        if pretrained_config is None:\n            assert pretrained_model is not None, 'Either \"pretrained_model\" or \"pretrained_config\" must not be None'\n            self.pretrained_model = pretrained_model\n        else:\n            assert pretrained_config is not None, 'Either \"pretrained_model\" or \"pretrained_config\" must not be None'\n            self.instantiate_pretrained(pretrained_config)\n\n        self.do_reshape = reshape\n\n        if n_channels is None:\n            n_channels = self.pretrained_model.encoder.ch\n\n        self.proj_norm = Normalize(in_channels, num_groups=in_channels // 2)\n        self.proj = nn.Conv2d(in_channels, n_channels, kernel_size=3,\n                              stride=1, padding=1)\n\n        blocks = []\n        downs = []\n        ch_in = n_channels\n        for m in ch_mult:\n            blocks.append(ResnetBlock(in_channels=ch_in, out_channels=m * n_channels, dropout=dropout))\n            ch_in = m * n_channels\n            downs.append(Downsample(ch_in, with_conv=False))\n\n        self.model = nn.ModuleList(blocks)\n        self.downsampler = nn.ModuleList(downs)\n\n    def instantiate_pretrained(self, config):\n        model = instantiate_from_config(config)\n        self.pretrained_model = model.eval()\n        # self.pretrained_model.train = False\n        for param in self.pretrained_model.parameters():\n            param.requires_grad = False\n\n    @torch.no_grad()\n    def encode_with_pretrained(self, 
x):\n        c = self.pretrained_model.encode(x)\n        if isinstance(c, DiagonalGaussianDistribution):\n            c = c.mode()\n        return c\n\n    def forward(self, x):\n        z_fs = self.encode_with_pretrained(x)\n        z = self.proj_norm(z_fs)\n        z = self.proj(z)\n        z = nonlinearity(z)\n\n        for submodel, downmodel in zip(self.model, self.downsampler):\n            z = submodel(z, temb=None)\n            z = downmodel(z)\n\n        if self.do_reshape:\n            z = rearrange(z, 'b c h w -> b (h w) c')\n        return z\n"
  },
  {
    "path": "ToonCrafter/lvdm/modules/networks/openaimodel3d.py",
    "content": "from functools import partial\nfrom abc import abstractmethod\nimport torch\nimport torch.nn as nn\nfrom einops import rearrange\nimport torch.nn.functional as F\nfrom lvdm.models.utils_diffusion import timestep_embedding\nfrom lvdm.common import checkpoint\nfrom lvdm.basics import (\n    zero_module,\n    conv_nd,\n    linear,\n    avg_pool_nd,\n    normalization\n)\nfrom lvdm.modules.attention import SpatialTransformer, TemporalTransformer\n\n\nclass TimestepBlock(nn.Module):\n    \"\"\"\n    Any module where forward() takes timestep embeddings as a second argument.\n    \"\"\"\n    @abstractmethod\n    def forward(self, x, emb):\n        \"\"\"\n        Apply the module to `x` given `emb` timestep embeddings.\n        \"\"\"\n\n\nclass TimestepEmbedSequential(nn.Sequential, TimestepBlock):\n    \"\"\"\n    A sequential module that passes timestep embeddings to the children that\n    support it as an extra input.\n    \"\"\"\n\n    def forward(self, x, emb, context=None, batch_size=None):\n        for layer in self:\n            if isinstance(layer, TimestepBlock):\n                x = layer(x, emb, batch_size=batch_size)\n            elif isinstance(layer, SpatialTransformer):\n                x = layer(x, context)\n            elif isinstance(layer, TemporalTransformer):\n                x = rearrange(x, '(b f) c h w -> b c f h w', b=batch_size)\n                x = layer(x, context)\n                x = rearrange(x, 'b c f h w -> (b f) c h w')\n            else:\n                x = layer(x)\n        return x\n\n\nclass Downsample(nn.Module):\n    \"\"\"\n    A downsampling layer with an optional convolution.\n    :param channels: channels in the inputs and outputs.\n    :param use_conv: a bool determining if a convolution is applied.\n    :param dims: determines if the signal is 1D, 2D, or 3D. If 3D, then\n                 downsampling occurs in the inner-two dimensions.\n    \"\"\"\n\n    def __init__(self, channels, use_conv, dims=2, out_channels=None, padding=1):\n        super().__init__()\n        self.channels = channels\n        self.out_channels = out_channels or channels\n        self.use_conv = use_conv\n        self.dims = dims\n        stride = 2 if dims != 3 else (1, 2, 2)\n        if use_conv:\n            self.op = conv_nd(\n                dims, self.channels, self.out_channels, 3, stride=stride, padding=padding\n            )\n        else:\n            assert self.channels == self.out_channels\n            self.op = avg_pool_nd(dims, kernel_size=stride, stride=stride)\n\n    def forward(self, x):\n        assert x.shape[1] == self.channels\n        return self.op(x)\n\n\nclass Upsample(nn.Module):\n    \"\"\"\n    An upsampling layer with an optional convolution.\n    :param channels: channels in the inputs and outputs.\n    :param use_conv: a bool determining if a convolution is applied.\n    :param dims: determines if the signal is 1D, 2D, or 3D. 
If 3D, then\n                 upsampling occurs in the inner-two dimensions.\n    \"\"\"\n\n    def __init__(self, channels, use_conv, dims=2, out_channels=None, padding=1):\n        super().__init__()\n        self.channels = channels\n        self.out_channels = out_channels or channels\n        self.use_conv = use_conv\n        self.dims = dims\n        if use_conv:\n            self.conv = conv_nd(dims, self.channels, self.out_channels, 3, padding=padding)\n\n    def forward(self, x):\n        assert x.shape[1] == self.channels\n        if self.dims == 3:\n            x = F.interpolate(x, (x.shape[2], x.shape[3] * 2, x.shape[4] * 2), mode='nearest')\n        else:\n            x = F.interpolate(x, scale_factor=2, mode='nearest')\n        if self.use_conv:\n            x = self.conv(x)\n        return x\n\n\nclass ResBlock(TimestepBlock):\n    \"\"\"\n    A residual block that can optionally change the number of channels.\n    :param channels: the number of input channels.\n    :param emb_channels: the number of timestep embedding channels.\n    :param dropout: the rate of dropout.\n    :param out_channels: if specified, the number of out channels.\n    :param use_conv: if True and out_channels is specified, use a spatial\n        convolution instead of a smaller 1x1 convolution to change the\n        channels in the skip connection.\n    :param dims: determines if the signal is 1D, 2D, or 3D.\n    :param up: if True, use this block for upsampling.\n    :param down: if True, use this block for downsampling.\n    :param use_temporal_conv: if True, use the temporal convolution.\n    :param use_image_dataset: if True, the temporal parameters will not be optimized.\n    \"\"\"\n\n    def __init__(\n        self,\n        channels,\n        emb_channels,\n        dropout,\n        out_channels=None,\n        use_scale_shift_norm=False,\n        dims=2,\n        use_checkpoint=False,\n        use_conv=False,\n        up=False,\n        down=False,\n        use_temporal_conv=False,\n        tempspatial_aware=False\n    ):\n        super().__init__()\n        self.channels = channels\n        self.emb_channels = emb_channels\n        self.dropout = dropout\n        self.out_channels = out_channels or channels\n        self.use_conv = use_conv\n        self.use_checkpoint = use_checkpoint\n        self.use_scale_shift_norm = use_scale_shift_norm\n        self.use_temporal_conv = use_temporal_conv\n\n        self.in_layers = nn.Sequential(\n            normalization(channels),\n            nn.SiLU(),\n            conv_nd(dims, channels, self.out_channels, 3, padding=1),\n        )\n\n        self.updown = up or down\n\n        if up:\n            self.h_upd = Upsample(channels, False, dims)\n            self.x_upd = Upsample(channels, False, dims)\n        elif down:\n            self.h_upd = Downsample(channels, False, dims)\n            self.x_upd = Downsample(channels, False, dims)\n        else:\n            self.h_upd = self.x_upd = nn.Identity()\n\n        self.emb_layers = nn.Sequential(\n            nn.SiLU(),\n            nn.Linear(\n                emb_channels,\n                2 * self.out_channels if use_scale_shift_norm else self.out_channels,\n            ),\n        )\n        self.out_layers = nn.Sequential(\n            normalization(self.out_channels),\n            nn.SiLU(),\n            nn.Dropout(p=dropout),\n            zero_module(nn.Conv2d(self.out_channels, self.out_channels, 3, padding=1)),\n        )\n\n        if self.out_channels == channels:\n            
self.skip_connection = nn.Identity()\n        elif use_conv:\n            self.skip_connection = conv_nd(dims, channels, self.out_channels, 3, padding=1)\n        else:\n            self.skip_connection = conv_nd(dims, channels, self.out_channels, 1)\n\n        if self.use_temporal_conv:\n            self.temopral_conv = TemporalConvBlock(\n                self.out_channels,\n                self.out_channels,\n                dropout=0.1,\n                spatial_aware=tempspatial_aware\n            )\n\n    def forward(self, x, emb, batch_size=None):\n        \"\"\"\n        Apply the block to a Tensor, conditioned on a timestep embedding.\n        :param x: an [N x C x ...] Tensor of features.\n        :param emb: an [N x emb_channels] Tensor of timestep embeddings.\n        :return: an [N x C x ...] Tensor of outputs.\n        \"\"\"\n        input_tuple = (x, emb)\n        if batch_size:\n            forward_batchsize = partial(self._forward, batch_size=batch_size)\n            return checkpoint(forward_batchsize, input_tuple, self.parameters(), self.use_checkpoint)\n        return checkpoint(self._forward, input_tuple, self.parameters(), self.use_checkpoint)\n\n    def _forward(self, x, emb, batch_size=None):\n        if self.updown:\n            in_rest, in_conv = self.in_layers[:-1], self.in_layers[-1]\n            h = in_rest(x)\n            h = self.h_upd(h)\n            x = self.x_upd(x)\n            h = in_conv(h)\n        else:\n            h = self.in_layers(x)\n        emb_out = self.emb_layers(emb).type(h.dtype)\n        while len(emb_out.shape) < len(h.shape):\n            emb_out = emb_out[..., None]\n        if self.use_scale_shift_norm:\n            out_norm, out_rest = self.out_layers[0], self.out_layers[1:]\n            scale, shift = torch.chunk(emb_out, 2, dim=1)\n            h = out_norm(h) * (1 + scale) + shift\n            h = out_rest(h)\n        else:\n            h = h + emb_out\n            h = self.out_layers(h)\n        h = self.skip_connection(x) + h\n\n        if self.use_temporal_conv and batch_size:\n            h = rearrange(h, '(b t) c h w -> b c t h w', b=batch_size)\n            h = self.temopral_conv(h)\n            h = rearrange(h, 'b c t h w -> (b t) c h w')\n        return h\n\n\nclass TemporalConvBlock(nn.Module):\n    \"\"\"\n    Adapted from modelscope: https://github.com/modelscope/modelscope/blob/master/modelscope/models/multi_modal/video_synthesis/unet_sd.py\n    \"\"\"\n    def __init__(self, in_channels, out_channels=None, dropout=0.0, spatial_aware=False):\n        super(TemporalConvBlock, self).__init__()\n        if out_channels is None:\n            out_channels = in_channels\n        self.in_channels = in_channels\n        self.out_channels = out_channels\n        th_kernel_shape = (3, 1, 1) if not spatial_aware else (3, 3, 1)\n        th_padding_shape = (1, 0, 0) if not spatial_aware else (1, 1, 0)\n        tw_kernel_shape = (3, 1, 1) if not spatial_aware else (3, 1, 3)\n        tw_padding_shape = (1, 0, 0) if not spatial_aware else (1, 0, 1)\n\n        # conv layers\n        self.conv1 = nn.Sequential(\n            nn.GroupNorm(32, in_channels), nn.SiLU(),\n            nn.Conv3d(in_channels, out_channels, th_kernel_shape, padding=th_padding_shape))\n        self.conv2 = nn.Sequential(\n            nn.GroupNorm(32, out_channels), nn.SiLU(), nn.Dropout(dropout),\n            nn.Conv3d(out_channels, in_channels, tw_kernel_shape, padding=tw_padding_shape))\n        self.conv3 = nn.Sequential(\n            nn.GroupNorm(32, 
out_channels), nn.SiLU(), nn.Dropout(dropout),\n            nn.Conv3d(out_channels, in_channels, th_kernel_shape, padding=th_padding_shape))\n        self.conv4 = nn.Sequential(\n            nn.GroupNorm(32, out_channels), nn.SiLU(), nn.Dropout(dropout),\n            nn.Conv3d(out_channels, in_channels, tw_kernel_shape, padding=tw_padding_shape))\n\n        # zero out the last layer params,so the conv block is identity\n        nn.init.zeros_(self.conv4[-1].weight)\n        nn.init.zeros_(self.conv4[-1].bias)\n\n    def forward(self, x):\n        identity = x\n        x = self.conv1(x)\n        x = self.conv2(x)\n        x = self.conv3(x)\n        x = self.conv4(x)\n\n        return identity + x\n\nclass UNetModel(nn.Module):\n    \"\"\"\n    The full UNet model with attention and timestep embedding.\n    :param in_channels: in_channels in the input Tensor.\n    :param model_channels: base channel count for the model.\n    :param out_channels: channels in the output Tensor.\n    :param num_res_blocks: number of residual blocks per downsample.\n    :param attention_resolutions: a collection of downsample rates at which\n        attention will take place. May be a set, list, or tuple.\n        For example, if this contains 4, then at 4x downsampling, attention\n        will be used.\n    :param dropout: the dropout probability.\n    :param channel_mult: channel multiplier for each level of the UNet.\n    :param conv_resample: if True, use learned convolutions for upsampling and\n        downsampling.\n    :param dims: determines if the signal is 1D, 2D, or 3D.\n    :param num_classes: if specified (as an int), then this model will be\n        class-conditional with `num_classes` classes.\n    :param use_checkpoint: use gradient checkpointing to reduce memory usage.\n    :param num_heads: the number of attention heads in each attention layer.\n    :param num_heads_channels: if specified, ignore num_heads and instead use\n                               a fixed channel width per attention head.\n    :param num_heads_upsample: works with num_heads to set a different number\n                               of heads for upsampling. 
Deprecated.\n    :param use_scale_shift_norm: use a FiLM-like conditioning mechanism.\n    :param resblock_updown: use residual blocks for up/downsampling.\n    :param use_new_attention_order: use a different attention pattern for potentially\n                                    increased efficiency.\n    \"\"\"\n\n    def __init__(self,\n                 in_channels,\n                 model_channels,\n                 out_channels,\n                 num_res_blocks,\n                 attention_resolutions,\n                 dropout=0.0,\n                 channel_mult=(1, 2, 4, 8),\n                 conv_resample=True,\n                 dims=2,\n                 context_dim=None,\n                 use_scale_shift_norm=False,\n                 resblock_updown=False,\n                 num_heads=-1,\n                 num_head_channels=-1,\n                 transformer_depth=1,\n                 use_linear=False,\n                 use_checkpoint=False,\n                 temporal_conv=False,\n                 tempspatial_aware=False,\n                 temporal_attention=True,\n                 use_relative_position=True,\n                 use_causal_attention=False,\n                 temporal_length=None,\n                 use_fp16=False,\n                 addition_attention=False,\n                 temporal_selfatt_only=True,\n                 image_cross_attention=False,\n                 image_cross_attention_scale_learnable=False,\n                 default_fs=4,\n                 fs_condition=False,\n                ):\n        super(UNetModel, self).__init__()\n        if num_heads == -1:\n            assert num_head_channels != -1, 'Either num_heads or num_head_channels has to be set'\n        if num_head_channels == -1:\n            assert num_heads != -1, 'Either num_heads or num_head_channels has to be set'\n\n        self.in_channels = in_channels\n        self.model_channels = model_channels\n        self.out_channels = out_channels\n        self.num_res_blocks = num_res_blocks\n        self.attention_resolutions = attention_resolutions\n        self.dropout = dropout\n        self.channel_mult = channel_mult\n        self.conv_resample = conv_resample\n        self.temporal_attention = temporal_attention\n        time_embed_dim = model_channels * 4\n        self.use_checkpoint = use_checkpoint\n        self.dtype = torch.float16 if use_fp16 else torch.float32\n        temporal_self_att_only = True\n        self.addition_attention = addition_attention\n        self.temporal_length = temporal_length\n        self.image_cross_attention = image_cross_attention\n        self.image_cross_attention_scale_learnable = image_cross_attention_scale_learnable\n        self.default_fs = default_fs\n        self.fs_condition = fs_condition\n\n        ## Time embedding blocks\n        self.time_embed = nn.Sequential(\n            linear(model_channels, time_embed_dim),\n            nn.SiLU(),\n            linear(time_embed_dim, time_embed_dim),\n        )\n        if fs_condition:\n            self.fps_embedding = nn.Sequential(\n                linear(model_channels, time_embed_dim),\n                nn.SiLU(),\n                linear(time_embed_dim, time_embed_dim),\n            )\n            nn.init.zeros_(self.fps_embedding[-1].weight)\n            nn.init.zeros_(self.fps_embedding[-1].bias)\n        ## Input Block\n        self.input_blocks = nn.ModuleList(\n            [\n                TimestepEmbedSequential(conv_nd(dims, in_channels, model_channels, 3, padding=1))\n            ]\n        
)\n        if self.addition_attention:\n            self.init_attn=TimestepEmbedSequential(\n                TemporalTransformer(\n                    model_channels,\n                    n_heads=8,\n                    d_head=num_head_channels,\n                    depth=transformer_depth,\n                    context_dim=context_dim,\n                    use_checkpoint=use_checkpoint, only_self_att=temporal_selfatt_only, \n                    causal_attention=False, relative_position=use_relative_position, \n                    temporal_length=temporal_length))\n\n        input_block_chans = [model_channels]\n        ch = model_channels\n        ds = 1\n        for level, mult in enumerate(channel_mult):\n            for _ in range(num_res_blocks):\n                layers = [\n                    ResBlock(ch, time_embed_dim, dropout,\n                        out_channels=mult * model_channels, dims=dims, use_checkpoint=use_checkpoint,\n                        use_scale_shift_norm=use_scale_shift_norm, tempspatial_aware=tempspatial_aware,\n                        use_temporal_conv=temporal_conv\n                    )\n                ]\n                ch = mult * model_channels\n                if ds in attention_resolutions:\n                    if num_head_channels == -1:\n                        dim_head = ch // num_heads\n                    else:\n                        num_heads = ch // num_head_channels\n                        dim_head = num_head_channels\n                    layers.append(\n                        SpatialTransformer(ch, num_heads, dim_head, \n                            depth=transformer_depth, context_dim=context_dim, use_linear=use_linear,\n                            use_checkpoint=use_checkpoint, disable_self_attn=False, \n                            video_length=temporal_length, image_cross_attention=self.image_cross_attention,\n                            image_cross_attention_scale_learnable=self.image_cross_attention_scale_learnable,                      \n                        )\n                    )\n                    if self.temporal_attention:\n                        layers.append(\n                            TemporalTransformer(ch, num_heads, dim_head,\n                                depth=transformer_depth, context_dim=context_dim, use_linear=use_linear,\n                                use_checkpoint=use_checkpoint, only_self_att=temporal_self_att_only, \n                                causal_attention=use_causal_attention, relative_position=use_relative_position, \n                                temporal_length=temporal_length\n                            )\n                        )\n                self.input_blocks.append(TimestepEmbedSequential(*layers))\n                input_block_chans.append(ch)\n            if level != len(channel_mult) - 1:\n                out_ch = ch\n                self.input_blocks.append(\n                    TimestepEmbedSequential(\n                        ResBlock(ch, time_embed_dim, dropout, \n                            out_channels=out_ch, dims=dims, use_checkpoint=use_checkpoint,\n                            use_scale_shift_norm=use_scale_shift_norm,\n                            down=True\n                        )\n                        if resblock_updown\n                        else Downsample(ch, conv_resample, dims=dims, out_channels=out_ch)\n                    )\n                )\n                ch = out_ch\n                input_block_chans.append(ch)\n                ds *= 2\n\n      
  if num_head_channels == -1:\n            dim_head = ch // num_heads\n        else:\n            num_heads = ch // num_head_channels\n            dim_head = num_head_channels\n        layers = [\n            ResBlock(ch, time_embed_dim, dropout,\n                dims=dims, use_checkpoint=use_checkpoint,\n                use_scale_shift_norm=use_scale_shift_norm, tempspatial_aware=tempspatial_aware,\n                use_temporal_conv=temporal_conv\n            ),\n            SpatialTransformer(ch, num_heads, dim_head, \n                depth=transformer_depth, context_dim=context_dim, use_linear=use_linear,\n                use_checkpoint=use_checkpoint, disable_self_attn=False, video_length=temporal_length, \n                image_cross_attention=self.image_cross_attention,image_cross_attention_scale_learnable=self.image_cross_attention_scale_learnable                \n            )\n        ]\n        if self.temporal_attention:\n            layers.append(\n                TemporalTransformer(ch, num_heads, dim_head,\n                    depth=transformer_depth, context_dim=context_dim, use_linear=use_linear,\n                    use_checkpoint=use_checkpoint, only_self_att=temporal_self_att_only, \n                    causal_attention=use_causal_attention, relative_position=use_relative_position, \n                    temporal_length=temporal_length\n                )\n            )\n        layers.append(\n            ResBlock(ch, time_embed_dim, dropout,\n                dims=dims, use_checkpoint=use_checkpoint,\n                use_scale_shift_norm=use_scale_shift_norm, tempspatial_aware=tempspatial_aware, \n                use_temporal_conv=temporal_conv\n                )\n        )\n\n        ## Middle Block\n        self.middle_block = TimestepEmbedSequential(*layers)\n\n        ## Output Block\n        self.output_blocks = nn.ModuleList([])\n        for level, mult in list(enumerate(channel_mult))[::-1]:\n            for i in range(num_res_blocks + 1):\n                ich = input_block_chans.pop()\n                layers = [\n                    ResBlock(ch + ich, time_embed_dim, dropout,\n                        out_channels=mult * model_channels, dims=dims, use_checkpoint=use_checkpoint,\n                        use_scale_shift_norm=use_scale_shift_norm, tempspatial_aware=tempspatial_aware,\n                        use_temporal_conv=temporal_conv\n                    )\n                ]\n                ch = model_channels * mult\n                if ds in attention_resolutions:\n                    if num_head_channels == -1:\n                        dim_head = ch // num_heads\n                    else:\n                        num_heads = ch // num_head_channels\n                        dim_head = num_head_channels\n                    layers.append(\n                        SpatialTransformer(ch, num_heads, dim_head, \n                            depth=transformer_depth, context_dim=context_dim, use_linear=use_linear,\n                            use_checkpoint=use_checkpoint, disable_self_attn=False, video_length=temporal_length,\n                            image_cross_attention=self.image_cross_attention,image_cross_attention_scale_learnable=self.image_cross_attention_scale_learnable    \n                        )\n                    )\n                    if self.temporal_attention:\n                        layers.append(\n                            TemporalTransformer(ch, num_heads, dim_head,\n                                depth=transformer_depth, 
context_dim=context_dim, use_linear=use_linear,\n                                use_checkpoint=use_checkpoint, only_self_att=temporal_self_att_only, \n                                causal_attention=use_causal_attention, relative_position=use_relative_position, \n                                temporal_length=temporal_length\n                            )\n                        )\n                if level and i == num_res_blocks:\n                    out_ch = ch\n                    layers.append(\n                        ResBlock(ch, time_embed_dim, dropout,\n                            out_channels=out_ch, dims=dims, use_checkpoint=use_checkpoint,\n                            use_scale_shift_norm=use_scale_shift_norm,\n                            up=True\n                        )\n                        if resblock_updown\n                        else Upsample(ch, conv_resample, dims=dims, out_channels=out_ch)\n                    )\n                    ds //= 2\n                self.output_blocks.append(TimestepEmbedSequential(*layers))\n\n        self.out = nn.Sequential(\n            normalization(ch),\n            nn.SiLU(),\n            zero_module(conv_nd(dims, model_channels, out_channels, 3, padding=1)),\n        )\n\n    def forward(self, x, timesteps, context=None, features_adapter=None, fs=None, **kwargs):\n        b,_,t,_,_ = x.shape\n        t_emb = timestep_embedding(timesteps, self.model_channels, repeat_only=False).type(x.dtype)\n        emb = self.time_embed(t_emb)\n        \n        ## repeat t times for context [(b t) 77 768] & time embedding\n        ## check if we use per-frame image conditioning\n        _, l_context, _ = context.shape\n        if l_context == 77 + t*16: ## !!! HARD CODE here\n            context_text, context_img = context[:,:77,:], context[:,77:,:]\n            context_text = context_text.repeat_interleave(repeats=t, dim=0)\n            context_img = rearrange(context_img, 'b (t l) c -> (b t) l c', t=t)\n            context = torch.cat([context_text, context_img], dim=1)\n        else:\n            context = context.repeat_interleave(repeats=t, dim=0)\n        emb = emb.repeat_interleave(repeats=t, dim=0)\n        \n        ## always in shape (b t) c h w, except for temporal layer\n        x = rearrange(x, 'b c t h w -> (b t) c h w')\n\n        ## combine emb\n        if self.fs_condition:\n            if fs is None:\n                fs = torch.tensor(\n                    [self.default_fs] * b, dtype=torch.long, device=x.device)\n            fs_emb = timestep_embedding(fs, self.model_channels, repeat_only=False).type(x.dtype)\n\n            fs_embed = self.fps_embedding(fs_emb)\n            fs_embed = fs_embed.repeat_interleave(repeats=t, dim=0)\n            emb = emb + fs_embed\n        if self.dtype != emb.dtype:\n            self.dtype = emb.dtype\n        h = x.type(self.dtype)\n        adapter_idx = 0\n        hs = []\n        for id, module in enumerate(self.input_blocks):\n            h = module(h, emb, context=context, batch_size=b)\n            if id ==0 and self.addition_attention:\n                h = self.init_attn(h, emb, context=context, batch_size=b)\n            ## plug-in adapter features\n            if ((id+1)%3 == 0) and features_adapter is not None:\n                h = h + features_adapter[adapter_idx]\n                adapter_idx += 1\n            hs.append(h)\n        if features_adapter is not None:\n            assert len(features_adapter)==adapter_idx, 'Wrong features_adapter'\n\n        h = self.middle_block(h, 
emb, context=context, batch_size=b)\n        for module in self.output_blocks:\n            h = torch.cat([h, hs.pop()], dim=1)\n            h = module(h, emb, context=context, batch_size=b)\n        h = h.type(x.dtype)\n        y = self.out(h)\n        \n        # reshape back to (b c t h w)\n        y = rearrange(y, '(b t) c h w -> b c t h w', b=b)\n        return y"
  },
  {
    "path": "ToonCrafter/lvdm/modules/x_transformer.py",
    "content": "\"\"\"shout-out to https://github.com/lucidrains/x-transformers/tree/main/x_transformers\"\"\"\nfrom functools import partial\nfrom inspect import isfunction\nfrom collections import namedtuple\nfrom einops import rearrange, repeat\nimport torch\nfrom torch import nn, einsum\nimport torch.nn.functional as F\n\n# constants\nDEFAULT_DIM_HEAD = 64\n\nIntermediates = namedtuple('Intermediates', [\n    'pre_softmax_attn',\n    'post_softmax_attn'\n])\n\nLayerIntermediates = namedtuple('Intermediates', [\n    'hiddens',\n    'attn_intermediates'\n])\n\n\nclass AbsolutePositionalEmbedding(nn.Module):\n    def __init__(self, dim, max_seq_len):\n        super().__init__()\n        self.emb = nn.Embedding(max_seq_len, dim)\n        self.init_()\n\n    def init_(self):\n        nn.init.normal_(self.emb.weight, std=0.02)\n\n    def forward(self, x):\n        n = torch.arange(x.shape[1], device=x.device)\n        return self.emb(n)[None, :, :]\n\n\nclass FixedPositionalEmbedding(nn.Module):\n    def __init__(self, dim):\n        super().__init__()\n        inv_freq = 1. / (10000 ** (torch.arange(0, dim, 2).float() / dim))\n        self.register_buffer('inv_freq', inv_freq)\n\n    def forward(self, x, seq_dim=1, offset=0):\n        t = torch.arange(x.shape[seq_dim], device=x.device).type_as(self.inv_freq) + offset\n        sinusoid_inp = torch.einsum('i , j -> i j', t, self.inv_freq)\n        emb = torch.cat((sinusoid_inp.sin(), sinusoid_inp.cos()), dim=-1)\n        return emb[None, :, :]\n\n\n# helpers\n\ndef exists(val):\n    return val is not None\n\n\ndef default(val, d):\n    if exists(val):\n        return val\n    return d() if isfunction(d) else d\n\n\ndef always(val):\n    def inner(*args, **kwargs):\n        return val\n    return inner\n\n\ndef not_equals(val):\n    def inner(x):\n        return x != val\n    return inner\n\n\ndef equals(val):\n    def inner(x):\n        return x == val\n    return inner\n\n\ndef max_neg_value(tensor):\n    return -torch.finfo(tensor.dtype).max\n\n\n# keyword argument helpers\n\ndef pick_and_pop(keys, d):\n    values = list(map(lambda key: d.pop(key), keys))\n    return dict(zip(keys, values))\n\n\ndef group_dict_by_key(cond, d):\n    return_val = [dict(), dict()]\n    for key in d.keys():\n        match = bool(cond(key))\n        ind = int(not match)\n        return_val[ind][key] = d[key]\n    return (*return_val,)\n\n\ndef string_begins_with(prefix, str):\n    return str.startswith(prefix)\n\n\ndef group_by_key_prefix(prefix, d):\n    return group_dict_by_key(partial(string_begins_with, prefix), d)\n\n\ndef groupby_prefix_and_trim(prefix, d):\n    kwargs_with_prefix, kwargs = group_dict_by_key(partial(string_begins_with, prefix), d)\n    kwargs_without_prefix = dict(map(lambda x: (x[0][len(prefix):], x[1]), tuple(kwargs_with_prefix.items())))\n    return kwargs_without_prefix, kwargs\n\n\n# classes\nclass Scale(nn.Module):\n    def __init__(self, value, fn):\n        super().__init__()\n        self.value = value\n        self.fn = fn\n\n    def forward(self, x, **kwargs):\n        x, *rest = self.fn(x, **kwargs)\n        return (x * self.value, *rest)\n\n\nclass Rezero(nn.Module):\n    def __init__(self, fn):\n        super().__init__()\n        self.fn = fn\n        self.g = nn.Parameter(torch.zeros(1))\n\n    def forward(self, x, **kwargs):\n        x, *rest = self.fn(x, **kwargs)\n        return (x * self.g, *rest)\n\n\nclass ScaleNorm(nn.Module):\n    def __init__(self, dim, eps=1e-5):\n        super().__init__()\n        self.scale = 
dim ** -0.5\n        self.eps = eps\n        self.g = nn.Parameter(torch.ones(1))\n\n    def forward(self, x):\n        norm = torch.norm(x, dim=-1, keepdim=True) * self.scale\n        return x / norm.clamp(min=self.eps) * self.g\n\n\nclass RMSNorm(nn.Module):\n    def __init__(self, dim, eps=1e-8):\n        super().__init__()\n        self.scale = dim ** -0.5\n        self.eps = eps\n        self.g = nn.Parameter(torch.ones(dim))\n\n    def forward(self, x):\n        norm = torch.norm(x, dim=-1, keepdim=True) * self.scale\n        return x / norm.clamp(min=self.eps) * self.g\n\n\nclass Residual(nn.Module):\n    def forward(self, x, residual):\n        return x + residual\n\n\nclass GRUGating(nn.Module):\n    def __init__(self, dim):\n        super().__init__()\n        self.gru = nn.GRUCell(dim, dim)\n\n    def forward(self, x, residual):\n        gated_output = self.gru(\n            rearrange(x, 'b n d -> (b n) d'),\n            rearrange(residual, 'b n d -> (b n) d')\n        )\n\n        return gated_output.reshape_as(x)\n\n\n# feedforward\n\nclass GEGLU(nn.Module):\n    def __init__(self, dim_in, dim_out):\n        super().__init__()\n        self.proj = nn.Linear(dim_in, dim_out * 2)\n\n    def forward(self, x):\n        x, gate = self.proj(x).chunk(2, dim=-1)\n        return x * F.gelu(gate)\n\n\nclass FeedForward(nn.Module):\n    def __init__(self, dim, dim_out=None, mult=4, glu=False, dropout=0.):\n        super().__init__()\n        inner_dim = int(dim * mult)\n        dim_out = default(dim_out, dim)\n        project_in = nn.Sequential(\n            nn.Linear(dim, inner_dim),\n            nn.GELU()\n        ) if not glu else GEGLU(dim, inner_dim)\n\n        self.net = nn.Sequential(\n            project_in,\n            nn.Dropout(dropout),\n            nn.Linear(inner_dim, dim_out)\n        )\n\n    def forward(self, x):\n        return self.net(x)\n\n\n# attention.\nclass Attention(nn.Module):\n    def __init__(\n            self,\n            dim,\n            dim_head=DEFAULT_DIM_HEAD,\n            heads=8,\n            causal=False,\n            mask=None,\n            talking_heads=False,\n            sparse_topk=None,\n            use_entmax15=False,\n            num_mem_kv=0,\n            dropout=0.,\n            on_attn=False\n    ):\n        super().__init__()\n        if use_entmax15:\n            raise NotImplementedError(\"Check out entmax activation instead of softmax activation!\")\n        self.scale = dim_head ** -0.5\n        self.heads = heads\n        self.causal = causal\n        self.mask = mask\n\n        inner_dim = dim_head * heads\n\n        self.to_q = nn.Linear(dim, inner_dim, bias=False)\n        self.to_k = nn.Linear(dim, inner_dim, bias=False)\n        self.to_v = nn.Linear(dim, inner_dim, bias=False)\n        self.dropout = nn.Dropout(dropout)\n\n        # talking heads\n        self.talking_heads = talking_heads\n        if talking_heads:\n            self.pre_softmax_proj = nn.Parameter(torch.randn(heads, heads))\n            self.post_softmax_proj = nn.Parameter(torch.randn(heads, heads))\n\n        # explicit topk sparse attention\n        self.sparse_topk = sparse_topk\n\n        # entmax\n        #self.attn_fn = entmax15 if use_entmax15 else F.softmax\n        self.attn_fn = F.softmax\n\n        # add memory key / values\n        self.num_mem_kv = num_mem_kv\n        if num_mem_kv > 0:\n            self.mem_k = nn.Parameter(torch.randn(heads, num_mem_kv, dim_head))\n            self.mem_v = nn.Parameter(torch.randn(heads, num_mem_kv, 
dim_head))\n\n        # attention on attention\n        self.attn_on_attn = on_attn\n        self.to_out = nn.Sequential(nn.Linear(inner_dim, dim * 2), nn.GLU()) if on_attn else nn.Linear(inner_dim, dim)\n\n    def forward(\n            self,\n            x,\n            context=None,\n            mask=None,\n            context_mask=None,\n            rel_pos=None,\n            sinusoidal_emb=None,\n            prev_attn=None,\n            mem=None\n    ):\n        b, n, _, h, talking_heads, device = *x.shape, self.heads, self.talking_heads, x.device\n        kv_input = default(context, x)\n\n        q_input = x\n        k_input = kv_input\n        v_input = kv_input\n\n        if exists(mem):\n            k_input = torch.cat((mem, k_input), dim=-2)\n            v_input = torch.cat((mem, v_input), dim=-2)\n\n        if exists(sinusoidal_emb):\n            # in shortformer, the query would start at a position offset depending on the past cached memory\n            offset = k_input.shape[-2] - q_input.shape[-2]\n            q_input = q_input + sinusoidal_emb(q_input, offset=offset)\n            k_input = k_input + sinusoidal_emb(k_input)\n\n        q = self.to_q(q_input)\n        k = self.to_k(k_input)\n        v = self.to_v(v_input)\n\n        q, k, v = map(lambda t: rearrange(t, 'b n (h d) -> b h n d', h=h), (q, k, v))\n\n        input_mask = None\n        if any(map(exists, (mask, context_mask))):\n            q_mask = default(mask, lambda: torch.ones((b, n), device=device).bool())\n            k_mask = q_mask if not exists(context) else context_mask\n            k_mask = default(k_mask, lambda: torch.ones((b, k.shape[-2]), device=device).bool())\n            q_mask = rearrange(q_mask, 'b i -> b () i ()')\n            k_mask = rearrange(k_mask, 'b j -> b () () j')\n            input_mask = q_mask * k_mask\n\n        if self.num_mem_kv > 0:\n            mem_k, mem_v = map(lambda t: repeat(t, 'h n d -> b h n d', b=b), (self.mem_k, self.mem_v))\n            k = torch.cat((mem_k, k), dim=-2)\n            v = torch.cat((mem_v, v), dim=-2)\n            if exists(input_mask):\n                input_mask = F.pad(input_mask, (self.num_mem_kv, 0), value=True)\n\n        dots = einsum('b h i d, b h j d -> b h i j', q, k) * self.scale\n        mask_value = max_neg_value(dots)\n\n        if exists(prev_attn):\n            dots = dots + prev_attn\n\n        pre_softmax_attn = dots\n\n        if talking_heads:\n            dots = einsum('b h i j, h k -> b k i j', dots, self.pre_softmax_proj).contiguous()\n\n        if exists(rel_pos):\n            dots = rel_pos(dots)\n\n        if exists(input_mask):\n            dots.masked_fill_(~input_mask, mask_value)\n            del input_mask\n\n        if self.causal:\n            i, j = dots.shape[-2:]\n            r = torch.arange(i, device=device)\n            mask = rearrange(r, 'i -> () () i ()') < rearrange(r, 'j -> () () () j')\n            mask = F.pad(mask, (j - i, 0), value=False)\n            dots.masked_fill_(mask, mask_value)\n            del mask\n\n        if exists(self.sparse_topk) and self.sparse_topk < dots.shape[-1]:\n            top, _ = dots.topk(self.sparse_topk, dim=-1)\n            vk = top[..., -1].unsqueeze(-1).expand_as(dots)\n            mask = dots < vk\n            dots.masked_fill_(mask, mask_value)\n            del mask\n\n        attn = self.attn_fn(dots, dim=-1)\n        post_softmax_attn = attn\n\n        attn = self.dropout(attn)\n\n        if talking_heads:\n            attn = einsum('b h i j, h k -> b k i j', attn, 
self.post_softmax_proj).contiguous()\n\n        out = einsum('b h i j, b h j d -> b h i d', attn, v)\n        out = rearrange(out, 'b h n d -> b n (h d)')\n\n        intermediates = Intermediates(\n            pre_softmax_attn=pre_softmax_attn,\n            post_softmax_attn=post_softmax_attn\n        )\n\n        return self.to_out(out), intermediates\n\n\nclass AttentionLayers(nn.Module):\n    def __init__(\n            self,\n            dim,\n            depth,\n            heads=8,\n            causal=False,\n            cross_attend=False,\n            only_cross=False,\n            use_scalenorm=False,\n            use_rmsnorm=False,\n            use_rezero=False,\n            rel_pos_num_buckets=32,\n            rel_pos_max_distance=128,\n            position_infused_attn=False,\n            custom_layers=None,\n            sandwich_coef=None,\n            par_ratio=None,\n            residual_attn=False,\n            cross_residual_attn=False,\n            macaron=False,\n            pre_norm=True,\n            gate_residual=False,\n            **kwargs\n    ):\n        super().__init__()\n        ff_kwargs, kwargs = groupby_prefix_and_trim('ff_', kwargs)\n        attn_kwargs, _ = groupby_prefix_and_trim('attn_', kwargs)\n\n        dim_head = attn_kwargs.get('dim_head', DEFAULT_DIM_HEAD)\n\n        self.dim = dim\n        self.depth = depth\n        self.layers = nn.ModuleList([])\n\n        self.has_pos_emb = position_infused_attn\n        self.pia_pos_emb = FixedPositionalEmbedding(dim) if position_infused_attn else None\n        self.rotary_pos_emb = always(None)\n\n        assert rel_pos_num_buckets <= rel_pos_max_distance, 'number of relative position buckets must be less than the relative position max distance'\n        self.rel_pos = None\n\n        self.pre_norm = pre_norm\n\n        self.residual_attn = residual_attn\n        self.cross_residual_attn = cross_residual_attn\n\n        norm_class = ScaleNorm if use_scalenorm else nn.LayerNorm\n        norm_class = RMSNorm if use_rmsnorm else norm_class\n        norm_fn = partial(norm_class, dim)\n\n        norm_fn = nn.Identity if use_rezero else norm_fn\n        branch_fn = Rezero if use_rezero else None\n\n        if cross_attend and not only_cross:\n            default_block = ('a', 'c', 'f')\n        elif cross_attend and only_cross:\n            default_block = ('c', 'f')\n        else:\n            default_block = ('a', 'f')\n\n        if macaron:\n            default_block = ('f',) + default_block\n\n        if exists(custom_layers):\n            layer_types = custom_layers\n        elif exists(par_ratio):\n            par_depth = depth * len(default_block)\n            assert 1 < par_ratio <= par_depth, 'par ratio out of range'\n            default_block = tuple(filter(not_equals('f'), default_block))\n            par_attn = par_depth // par_ratio\n            depth_cut = par_depth * 2 // 3  # 2 / 3 attention layer cutoff suggested by PAR paper\n            par_width = (depth_cut + depth_cut // par_attn) // par_attn\n            assert len(default_block) <= par_width, 'default block is too large for par_ratio'\n            par_block = default_block + ('f',) * (par_width - len(default_block))\n            par_head = par_block * par_attn\n            layer_types = par_head + ('f',) * (par_depth - len(par_head))\n        elif exists(sandwich_coef):\n            assert sandwich_coef > 0 and sandwich_coef <= depth, 'sandwich coefficient should be less than the depth'\n            layer_types = ('a',) * sandwich_coef + 
default_block * (depth - sandwich_coef) + ('f',) * sandwich_coef\n        else:\n            layer_types = default_block * depth\n\n        self.layer_types = layer_types\n        self.num_attn_layers = len(list(filter(equals('a'), layer_types)))\n\n        for layer_type in self.layer_types:\n            if layer_type == 'a':\n                layer = Attention(dim, heads=heads, causal=causal, **attn_kwargs)\n            elif layer_type == 'c':\n                layer = Attention(dim, heads=heads, **attn_kwargs)\n            elif layer_type == 'f':\n                layer = FeedForward(dim, **ff_kwargs)\n                layer = layer if not macaron else Scale(0.5, layer)\n            else:\n                raise Exception(f'invalid layer type {layer_type}')\n\n            if isinstance(layer, Attention) and exists(branch_fn):\n                layer = branch_fn(layer)\n\n            if gate_residual:\n                residual_fn = GRUGating(dim)\n            else:\n                residual_fn = Residual()\n\n            self.layers.append(nn.ModuleList([\n                norm_fn(),\n                layer,\n                residual_fn\n            ]))\n\n    def forward(\n            self,\n            x,\n            context=None,\n            mask=None,\n            context_mask=None,\n            mems=None,\n            return_hiddens=False\n    ):\n        hiddens = []\n        intermediates = []\n        prev_attn = None\n        prev_cross_attn = None\n\n        mems = mems.copy() if exists(mems) else [None] * self.num_attn_layers\n\n        for ind, (layer_type, (norm, block, residual_fn)) in enumerate(zip(self.layer_types, self.layers)):\n            is_last = ind == (len(self.layers) - 1)\n\n            if layer_type == 'a':\n                hiddens.append(x)\n                layer_mem = mems.pop(0)\n\n            residual = x\n\n            if self.pre_norm:\n                x = norm(x)\n\n            if layer_type == 'a':\n                out, inter = block(x, mask=mask, sinusoidal_emb=self.pia_pos_emb, rel_pos=self.rel_pos,\n                                   prev_attn=prev_attn, mem=layer_mem)\n            elif layer_type == 'c':\n                out, inter = block(x, context=context, mask=mask, context_mask=context_mask, prev_attn=prev_cross_attn)\n            elif layer_type == 'f':\n                out = block(x)\n\n            x = residual_fn(out, residual)\n\n            if layer_type in ('a', 'c'):\n                intermediates.append(inter)\n\n            if layer_type == 'a' and self.residual_attn:\n                prev_attn = inter.pre_softmax_attn\n            elif layer_type == 'c' and self.cross_residual_attn:\n                prev_cross_attn = inter.pre_softmax_attn\n\n            if not self.pre_norm and not is_last:\n                x = norm(x)\n\n        if return_hiddens:\n            intermediates = LayerIntermediates(\n                hiddens=hiddens,\n                attn_intermediates=intermediates\n            )\n\n            return x, intermediates\n\n        return x\n\n\nclass Encoder(AttentionLayers):\n    def __init__(self, **kwargs):\n        assert 'causal' not in kwargs, 'cannot set causality on encoder'\n        super().__init__(causal=False, **kwargs)\n\n\n\nclass TransformerWrapper(nn.Module):\n    def __init__(\n            self,\n            *,\n            num_tokens,\n            max_seq_len,\n            attn_layers,\n            emb_dim=None,\n            max_mem_len=0.,\n            emb_dropout=0.,\n            num_memory_tokens=None,\n    
        tie_embedding=False,\n            use_pos_emb=True\n    ):\n        super().__init__()\n        assert isinstance(attn_layers, AttentionLayers), 'attention layers must be one of Encoder or Decoder'\n\n        dim = attn_layers.dim\n        emb_dim = default(emb_dim, dim)\n\n        self.max_seq_len = max_seq_len\n        self.max_mem_len = max_mem_len\n        self.num_tokens = num_tokens\n\n        self.token_emb = nn.Embedding(num_tokens, emb_dim)\n        self.pos_emb = AbsolutePositionalEmbedding(emb_dim, max_seq_len) if (\n                    use_pos_emb and not attn_layers.has_pos_emb) else always(0)\n        self.emb_dropout = nn.Dropout(emb_dropout)\n\n        self.project_emb = nn.Linear(emb_dim, dim) if emb_dim != dim else nn.Identity()\n        self.attn_layers = attn_layers\n        self.norm = nn.LayerNorm(dim)\n\n        self.init_()\n\n        self.to_logits = nn.Linear(dim, num_tokens) if not tie_embedding else lambda t: t @ self.token_emb.weight.t()\n\n        # memory tokens (like [cls]) from Memory Transformers paper\n        num_memory_tokens = default(num_memory_tokens, 0)\n        self.num_memory_tokens = num_memory_tokens\n        if num_memory_tokens > 0:\n            self.memory_tokens = nn.Parameter(torch.randn(num_memory_tokens, dim))\n\n            # let funnel encoder know number of memory tokens, if specified\n            if hasattr(attn_layers, 'num_memory_tokens'):\n                attn_layers.num_memory_tokens = num_memory_tokens\n\n    def init_(self):\n        nn.init.normal_(self.token_emb.weight, std=0.02)\n\n    def forward(\n            self,\n            x,\n            return_embeddings=False,\n            mask=None,\n            return_mems=False,\n            return_attn=False,\n            mems=None,\n            **kwargs\n    ):\n        b, n, device, num_mem = *x.shape, x.device, self.num_memory_tokens\n        x = self.token_emb(x)\n        x += self.pos_emb(x)\n        x = self.emb_dropout(x)\n\n        x = self.project_emb(x)\n\n        if num_mem > 0:\n            mem = repeat(self.memory_tokens, 'n d -> b n d', b=b)\n            x = torch.cat((mem, x), dim=1)\n\n            # auto-handle masking after appending memory tokens\n            if exists(mask):\n                mask = F.pad(mask, (num_mem, 0), value=True)\n\n        x, intermediates = self.attn_layers(x, mask=mask, mems=mems, return_hiddens=True, **kwargs)\n        x = self.norm(x)\n\n        mem, x = x[:, :num_mem], x[:, num_mem:]\n\n        out = self.to_logits(x) if not return_embeddings else x\n\n        if return_mems:\n            hiddens = intermediates.hiddens\n            new_mems = list(map(lambda pair: torch.cat(pair, dim=-2), zip(mems, hiddens))) if exists(mems) else hiddens\n            new_mems = list(map(lambda t: t[..., -self.max_mem_len:, :].detach(), new_mems))\n            return out, new_mems\n\n        if return_attn:\n            attn_maps = list(map(lambda t: t.post_softmax_attn, intermediates.attn_intermediates))\n            return out, attn_maps\n\n        return out"
  },
  {
    "path": "ToonCrafter/main/__init__.py",
    "content": ""
  },
  {
    "path": "ToonCrafter/main/callbacks.py",
    "content": "import os\nimport time\nimport logging\nmainlogger = logging.getLogger('mainlogger')\n\nimport torch\nimport torchvision\nimport pytorch_lightning as pl\nfrom pytorch_lightning.callbacks import Callback\nfrom pytorch_lightning.utilities import rank_zero_only\nfrom pytorch_lightning.utilities import rank_zero_info\nfrom utils.save_video import log_local, prepare_to_log\n\n\nclass ImageLogger(Callback):\n    def __init__(self, batch_frequency, max_images=8, clamp=True, rescale=True, save_dir=None, \\\n                to_local=False, log_images_kwargs=None):\n        super().__init__()\n        self.rescale = rescale\n        self.batch_freq = batch_frequency\n        self.max_images = max_images\n        self.to_local = to_local\n        self.clamp = clamp\n        self.log_images_kwargs = log_images_kwargs if log_images_kwargs else {}\n        if self.to_local:\n            ## default save dir\n            self.save_dir = os.path.join(save_dir, \"images\")\n            os.makedirs(os.path.join(self.save_dir, \"train\"), exist_ok=True)\n            os.makedirs(os.path.join(self.save_dir, \"val\"), exist_ok=True)\n\n    def log_to_tensorboard(self, pl_module, batch_logs, filename, split, save_fps=8):\n        \"\"\" log images and videos to tensorboard \"\"\"        \n        global_step = pl_module.global_step\n        for key in batch_logs:\n            value = batch_logs[key]\n            tag = \"gs%d-%s/%s-%s\"%(global_step, split, filename, key)\n            if isinstance(value, list) and isinstance(value[0], str):\n                captions = ' |------| '.join(value)\n                pl_module.logger.experiment.add_text(tag, captions, global_step=global_step)\n            elif isinstance(value, torch.Tensor) and value.dim() == 5:\n                video = value\n                n = video.shape[0]\n                video = video.permute(2, 0, 1, 3, 4) # t,n,c,h,w\n                frame_grids = [torchvision.utils.make_grid(framesheet, nrow=int(n), padding=0) for framesheet in video] #[3, n*h, 1*w]\n                grid = torch.stack(frame_grids, dim=0) # stack in temporal dim [t, 3, n*h, w]\n                grid = (grid + 1.0) / 2.0\n                grid = grid.unsqueeze(dim=0)\n                pl_module.logger.experiment.add_video(tag, grid, fps=save_fps, global_step=global_step)\n            elif isinstance(value, torch.Tensor) and value.dim() == 4:\n                img = value\n                grid = torchvision.utils.make_grid(img, nrow=int(n), padding=0)\n                grid = (grid + 1.0) / 2.0  # -1,1 -> 0,1; c,h,w\n                pl_module.logger.experiment.add_image(tag, grid, global_step=global_step)\n            else:\n                pass\n\n    @rank_zero_only\n    def log_batch_imgs(self, pl_module, batch, batch_idx, split=\"train\"):\n        \"\"\" generate images, then save and log to tensorboard \"\"\"\n        skip_freq = self.batch_freq if split == \"train\" else 5\n        if (batch_idx+1) % skip_freq == 0:\n            is_train = pl_module.training\n            if is_train:\n                pl_module.eval()\n            torch.cuda.empty_cache()\n            with torch.no_grad():\n                log_func = pl_module.log_images\n                batch_logs = log_func(batch, split=split, **self.log_images_kwargs)\n            \n            ## process: move to CPU and clamp\n            batch_logs = prepare_to_log(batch_logs, self.max_images, self.clamp)\n            torch.cuda.empty_cache()\n            \n            filename = 
\"ep{}_idx{}_rank{}\".format(\n                pl_module.current_epoch,\n                batch_idx,\n                pl_module.global_rank)\n            if self.to_local:\n                mainlogger.info(\"Log [%s] batch <%s> to local ...\"%(split, filename))\n                filename = \"gs{}_\".format(pl_module.global_step) + filename\n                log_local(batch_logs, os.path.join(self.save_dir, split), filename, save_fps=10)\n            else:\n                mainlogger.info(\"Log [%s] batch <%s> to tensorboard ...\"%(split, filename))\n                self.log_to_tensorboard(pl_module, batch_logs, filename, split, save_fps=10)\n            mainlogger.info('Finish!')\n\n            if is_train:\n                pl_module.train()\n\n    def on_train_batch_end(self, trainer, pl_module, outputs, batch, batch_idx, dataloader_idx=None):\n        if self.batch_freq != -1 and pl_module.logdir:\n            self.log_batch_imgs(pl_module, batch, batch_idx, split=\"train\")\n\n    def on_validation_batch_end(self, trainer, pl_module, outputs, batch, batch_idx, dataloader_idx=None):\n        ## different with validation_step() that saving the whole validation set and only keep the latest,\n        ## it records the performance of every validation (without overwritten) by only keep a subset\n        if self.batch_freq != -1 and pl_module.logdir:\n            self.log_batch_imgs(pl_module, batch, batch_idx, split=\"val\")\n        if hasattr(pl_module, 'calibrate_grad_norm'):\n            if (pl_module.calibrate_grad_norm and batch_idx % 25 == 0) and batch_idx > 0:\n                self.log_gradients(trainer, pl_module, batch_idx=batch_idx)\n\n\nclass CUDACallback(Callback):\n    # see https://github.com/SeanNaren/minGPT/blob/master/mingpt/callback.py\n    def on_train_epoch_start(self, trainer, pl_module):\n        # Reset the memory use counter\n        # lightning update\n        if int((pl.__version__).split('.')[1])>=7:\n            gpu_index = trainer.strategy.root_device.index\n        else:\n            gpu_index = trainer.root_gpu\n        torch.cuda.reset_peak_memory_stats(gpu_index)\n        torch.cuda.synchronize(gpu_index)\n        self.start_time = time.time()\n\n    def on_train_epoch_end(self, trainer, pl_module):\n        if int((pl.__version__).split('.')[1])>=7:\n            gpu_index = trainer.strategy.root_device.index\n        else:\n            gpu_index = trainer.root_gpu\n        torch.cuda.synchronize(gpu_index)\n        max_memory = torch.cuda.max_memory_allocated(gpu_index) / 2 ** 20\n        epoch_time = time.time() - self.start_time\n\n        try:\n            max_memory = trainer.training_type_plugin.reduce(max_memory)\n            epoch_time = trainer.training_type_plugin.reduce(epoch_time)\n\n            rank_zero_info(f\"Average Epoch time: {epoch_time:.2f} seconds\")\n            rank_zero_info(f\"Average Peak memory {max_memory:.2f}MiB\")\n        except AttributeError:\n            pass\n"
  },
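For orientation, a minimal hedged sketch of how this `ImageLogger` might be constructed directly; the parameter values mirror the `batch_logger` defaults declared later in `main/utils_train.py`, while the commented Trainer wiring is hypothetical and not how this repository launches training.

```python
# Hedged sketch: instantiate ImageLogger with the same defaults that
# utils_train.py wires up under default_callbacks_cfg["batch_logger"].
# The import assumes ToonCrafter/main is on sys.path, as in trainer.py.
from callbacks import ImageLogger

image_logger = ImageLogger(
    batch_frequency=1000,          # log every 1000 training batches
    max_images=4,                  # cap on logged samples per call
    clamp=True,                    # clamp outputs before logging
    to_local=True,                 # write files under <save_dir>/images/{train,val}
    save_dir="logs/my_experiment", # placeholder experiment directory
)
# trainer = pl.Trainer(callbacks=[image_logger, ...])  # hypothetical wiring
```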
  {
    "path": "ToonCrafter/main/trainer.py",
    "content": "import argparse, os, sys, datetime\nfrom omegaconf import OmegaConf\nfrom transformers import logging as transf_logging\nimport pytorch_lightning as pl\nfrom pytorch_lightning import seed_everything\nfrom pytorch_lightning.trainer import Trainer\nimport torch\nsys.path.insert(1, os.path.join(sys.path[0], '..'))\nfrom ToonCrafter.utils.utils import instantiate_from_config\nfrom utils_train import get_trainer_callbacks, get_trainer_logger, get_trainer_strategy\nfrom utils_train import set_logger, init_workspace, load_checkpoints\n\n\ndef get_parser(**parser_kwargs):\n    parser = argparse.ArgumentParser(**parser_kwargs)\n    parser.add_argument(\"--seed\", \"-s\", type=int, default=20230211, help=\"seed for seed_everything\")\n    parser.add_argument(\"--name\", \"-n\", type=str, default=\"\", help=\"experiment name, as saving folder\")\n\n    parser.add_argument(\"--base\", \"-b\", nargs=\"*\", metavar=\"base_config.yaml\", help=\"paths to base configs. Loaded from left-to-right. \"\n                            \"Parameters can be overwritten or added with command-line options of the form `--key value`.\", default=list())\n    \n    parser.add_argument(\"--train\", \"-t\", action='store_true', default=False, help='train')\n    parser.add_argument(\"--val\", \"-v\", action='store_true', default=False, help='val')\n    parser.add_argument(\"--test\", action='store_true', default=False, help='test')\n\n    parser.add_argument(\"--logdir\", \"-l\", type=str, default=\"logs\", help=\"directory for logging dat shit\")\n    parser.add_argument(\"--auto_resume\", action='store_true', default=False, help=\"resume from full-info checkpoint\")\n    parser.add_argument(\"--auto_resume_weight_only\", action='store_true', default=False, help=\"resume from weight-only checkpoint\")\n    parser.add_argument(\"--debug\", \"-d\", action='store_true', default=False, help=\"enable post-mortem debugging\")\n\n    return parser\n    \ndef get_nondefault_trainer_args(args):\n    parser = argparse.ArgumentParser()\n    parser = Trainer.add_argparse_args(parser)\n    default_trainer_args = parser.parse_args([])\n    return sorted(k for k in vars(default_trainer_args) if getattr(args, k) != getattr(default_trainer_args, k))\n\n\nif __name__ == \"__main__\":\n    now = datetime.datetime.now().strftime(\"%Y-%m-%dT%H-%M-%S\")\n    local_rank = int(os.environ.get('LOCAL_RANK'))\n    global_rank = int(os.environ.get('RANK'))\n    num_rank = int(os.environ.get('WORLD_SIZE'))\n\n    parser = get_parser()\n    ## Extends existing argparse by default Trainer attributes\n    parser = Trainer.add_argparse_args(parser)\n    args, unknown = parser.parse_known_args()\n    ## disable transformer warning\n    transf_logging.set_verbosity_error()\n    seed_everything(args.seed)\n\n    ## yaml configs: \"model\" | \"data\" | \"lightning\"\n    configs = [OmegaConf.load(cfg) for cfg in args.base]\n    cli = OmegaConf.from_dotlist(unknown)\n    config = OmegaConf.merge(*configs, cli)\n    lightning_config = config.pop(\"lightning\", OmegaConf.create())\n    trainer_config = lightning_config.get(\"trainer\", OmegaConf.create()) \n\n    ## setup workspace directories\n    workdir, ckptdir, cfgdir, loginfo = init_workspace(args.name, args.logdir, config, lightning_config, global_rank)\n    logger = set_logger(logfile=os.path.join(loginfo, 'log_%d:%s.txt'%(global_rank, now)))\n    logger.info(\"@lightning version: %s [>=1.8 required]\"%(pl.__version__))  \n\n    ## MODEL CONFIG 
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>\n    logger.info(\"***** Configuring Model *****\")\n    config.model.params.logdir = workdir\n    model = instantiate_from_config(config.model)\n\n    ## load checkpoints\n    model = load_checkpoints(model, config.model)\n\n    ## register_schedule again to make ZTSNR work\n    if model.rescale_betas_zero_snr:\n        model.register_schedule(given_betas=model.given_betas, beta_schedule=model.beta_schedule, timesteps=model.timesteps,\n                                linear_start=model.linear_start, linear_end=model.linear_end, cosine_s=model.cosine_s)\n\n    ## update trainer config\n    for k in get_nondefault_trainer_args(args):\n        trainer_config[k] = getattr(args, k)\n        \n    num_nodes = trainer_config.num_nodes\n    ngpu_per_node = trainer_config.devices\n    logger.info(f\"Running on {num_rank}={num_nodes}x{ngpu_per_node} GPUs\")\n\n    ## setup learning rate\n    base_lr = config.model.base_learning_rate\n    bs = config.data.params.batch_size\n    if getattr(config.model, 'scale_lr', True):\n        model.learning_rate = num_rank * bs * base_lr\n    else:\n        model.learning_rate = base_lr\n\n\n    ## DATA CONFIG >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>\n    logger.info(\"***** Configuring Data *****\")\n    data = instantiate_from_config(config.data)\n    data.setup()\n    for k in data.datasets:\n        logger.info(f\"{k}, {data.datasets[k].__class__.__name__}, {len(data.datasets[k])}\")\n\n\n    ## TRAINER CONFIG >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>\n    logger.info(\"***** Configuring Trainer *****\")\n    if \"accelerator\" not in trainer_config:\n        trainer_config[\"accelerator\"] = \"gpu\"\n\n    ## setup trainer args: pl-logger and callbacks\n    trainer_kwargs = dict()\n    trainer_kwargs[\"num_sanity_val_steps\"] = 0\n    logger_cfg = get_trainer_logger(lightning_config, workdir, args.debug)\n    trainer_kwargs[\"logger\"] = instantiate_from_config(logger_cfg)\n    \n    ## setup callbacks\n    callbacks_cfg = get_trainer_callbacks(lightning_config, config, workdir, ckptdir, logger)\n    trainer_kwargs[\"callbacks\"] = [instantiate_from_config(callbacks_cfg[k]) for k in callbacks_cfg]\n    strategy_cfg = get_trainer_strategy(lightning_config)\n    trainer_kwargs[\"strategy\"] = strategy_cfg if type(strategy_cfg) == str else instantiate_from_config(strategy_cfg)\n    trainer_kwargs['precision'] = lightning_config.get('precision', 32)\n    trainer_kwargs[\"sync_batchnorm\"] = False\n\n    ## trainer config: others\n\n    trainer_args = argparse.Namespace(**trainer_config)\n    trainer = Trainer.from_argparse_args(trainer_args, **trainer_kwargs)\n\n    ## allow checkpointing via USR1\n    def melk(*args, **kwargs):\n        ## run all checkpoint hooks\n        if trainer.global_rank == 0:\n            print(\"Summoning checkpoint.\")\n            ckpt_path = os.path.join(ckptdir, \"last_summoning.ckpt\")\n            trainer.save_checkpoint(ckpt_path)\n\n    def divein(*args, **kwargs):\n        if trainer.global_rank == 0:\n            import pudb;\n            pudb.set_trace()\n\n    import signal\n    signal.signal(signal.SIGUSR1, melk)\n    signal.signal(signal.SIGUSR2, divein)\n\n    ## Running LOOP >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>\n    logger.info(\"***** Running the Loop *****\")\n    if args.train:\n        try:\n            if \"strategy\" in lightning_config and 
lightning_config['strategy'].startswith('deepspeed'):\n                logger.info(\"<Training in DeepSpeed Mode>\")\n                ## deepspeed\n                if trainer_kwargs['precision'] == 16:\n                    with torch.cuda.amp.autocast():\n                        trainer.fit(model, data)\n                else:\n                    trainer.fit(model, data)\n            else:\n                logger.info(\"<Training in DDPSharded Mode>\") ## this is default\n                ## ddpsharded\n                trainer.fit(model, data)\n        except Exception:\n            #melk()\n            raise\n\n    # if args.val:\n    #     trainer.validate(model, data)\n    # if args.test or not trainer.interrupted:\n    #     trainer.test(model, data)"
  },
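The learning-rate block in the trainer script scales the base rate linearly with the number of ranks and the per-GPU batch size whenever `scale_lr` is enabled; a small worked example of that rule (all numbers illustrative):

```python
# Illustrative numbers only: the scale_lr rule from the trainer script.
num_rank = 8            # WORLD_SIZE, i.e. total GPU processes
bs = 2                  # config.data.params.batch_size (per GPU)
base_lr = 1.0e-05       # config.model.base_learning_rate

scaled_lr = num_rank * bs * base_lr   # value assigned to model.learning_rate
print(f"{scaled_lr:.1e}")             # 1.6e-04
```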
  {
    "path": "ToonCrafter/main/utils_data.py",
    "content": "from functools import partial\nimport numpy as np\n\nimport torch\nimport pytorch_lightning as pl\nfrom torch.utils.data import DataLoader, Dataset\n\nimport os, sys\nos.chdir(sys.path[0])\nsys.path.append(\"..\")\nfrom lvdm.data.base import Txt2ImgIterableBaseDataset\nfrom ToonCrafter.utils.utils import instantiate_from_config\n\n\ndef worker_init_fn(_):\n    worker_info = torch.utils.data.get_worker_info()\n\n    dataset = worker_info.dataset\n    worker_id = worker_info.id\n\n    if isinstance(dataset, Txt2ImgIterableBaseDataset):\n        split_size = dataset.num_records // worker_info.num_workers\n        # reset num_records to the true number to retain reliable length information\n        dataset.sample_ids = dataset.valid_ids[worker_id * split_size:(worker_id + 1) * split_size]\n        current_id = np.random.choice(len(np.random.get_state()[1]), 1)\n        return np.random.seed(np.random.get_state()[1][current_id] + worker_id)\n    else:\n        return np.random.seed(np.random.get_state()[1][0] + worker_id)\n\n\nclass WrappedDataset(Dataset):\n    \"\"\"Wraps an arbitrary object with __len__ and __getitem__ into a pytorch dataset\"\"\"\n\n    def __init__(self, dataset):\n        self.data = dataset\n\n    def __len__(self):\n        return len(self.data)\n\n    def __getitem__(self, idx):\n        return self.data[idx]\n\n\nclass DataModuleFromConfig(pl.LightningDataModule):\n    def __init__(self, batch_size, train=None, validation=None, test=None, predict=None,\n                 wrap=False, num_workers=None, shuffle_test_loader=False, use_worker_init_fn=False,\n                 shuffle_val_dataloader=False, train_img=None,\n                 test_max_n_samples=None):\n        super().__init__()\n        self.batch_size = batch_size\n        self.dataset_configs = dict()\n        self.num_workers = num_workers if num_workers is not None else batch_size * 2\n        self.use_worker_init_fn = use_worker_init_fn\n        if train is not None:\n            self.dataset_configs[\"train\"] = train\n            self.train_dataloader = self._train_dataloader\n        if validation is not None:\n            self.dataset_configs[\"validation\"] = validation\n            self.val_dataloader = partial(self._val_dataloader, shuffle=shuffle_val_dataloader)\n        if test is not None:\n            self.dataset_configs[\"test\"] = test\n            self.test_dataloader = partial(self._test_dataloader, shuffle=shuffle_test_loader)\n        if predict is not None:\n            self.dataset_configs[\"predict\"] = predict\n            self.predict_dataloader = self._predict_dataloader\n\n        self.img_loader = None\n        self.wrap = wrap\n        self.test_max_n_samples = test_max_n_samples\n        self.collate_fn = None\n\n    def prepare_data(self):\n        pass\n\n    def setup(self, stage=None):\n        self.datasets = dict((k, instantiate_from_config(self.dataset_configs[k])) for k in self.dataset_configs)\n        if self.wrap:\n            for k in self.datasets:\n                self.datasets[k] = WrappedDataset(self.datasets[k])\n\n    def _train_dataloader(self):\n        is_iterable_dataset = isinstance(self.datasets['train'], Txt2ImgIterableBaseDataset)\n        if is_iterable_dataset or self.use_worker_init_fn:\n            init_fn = worker_init_fn\n        else:\n            init_fn = None\n        loader = DataLoader(self.datasets[\"train\"], batch_size=self.batch_size,\n                          num_workers=self.num_workers, shuffle=False if 
is_iterable_dataset else True,\n                          worker_init_fn=init_fn, collate_fn=self.collate_fn,\n                          )\n        return loader\n\n    def _val_dataloader(self, shuffle=False):\n        if isinstance(self.datasets['validation'], Txt2ImgIterableBaseDataset) or self.use_worker_init_fn:\n            init_fn = worker_init_fn\n        else:\n            init_fn = None\n        return DataLoader(self.datasets[\"validation\"],\n                          batch_size=self.batch_size,\n                          num_workers=self.num_workers,\n                          worker_init_fn=init_fn,\n                          shuffle=shuffle, \n                          collate_fn=self.collate_fn,\n                          )\n\n    def _test_dataloader(self, shuffle=False):\n        try:\n            is_iterable_dataset = isinstance(self.datasets['train'], Txt2ImgIterableBaseDataset)\n        except:\n            is_iterable_dataset = isinstance(self.datasets['test'], Txt2ImgIterableBaseDataset)\n\n        if is_iterable_dataset or self.use_worker_init_fn:\n            init_fn = worker_init_fn\n        else:\n            init_fn = None\n\n        # do not shuffle dataloader for iterable dataset\n        shuffle = shuffle and (not is_iterable_dataset)\n        if self.test_max_n_samples is not None:\n            dataset = torch.utils.data.Subset(self.datasets[\"test\"], list(range(self.test_max_n_samples)))\n        else:\n            dataset = self.datasets[\"test\"]\n        return DataLoader(dataset, batch_size=self.batch_size,\n                          num_workers=self.num_workers, worker_init_fn=init_fn, shuffle=shuffle,\n                          collate_fn=self.collate_fn,\n                          )\n\n    def _predict_dataloader(self, shuffle=False):\n        if isinstance(self.datasets['predict'], Txt2ImgIterableBaseDataset) or self.use_worker_init_fn:\n            init_fn = worker_init_fn\n        else:\n            init_fn = None\n        return DataLoader(self.datasets[\"predict\"], batch_size=self.batch_size,\n                          num_workers=self.num_workers, worker_init_fn=init_fn,\n                          collate_fn=self.collate_fn,\n                          )\n"
  },
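A hedged sketch of the config shape `DataModuleFromConfig` consumes, analogous to the `data` section of the training YAMLs; the inner dataset `target` is a placeholder rather than a class shipped in this listing, and the import paths depend on how the repository is placed on `sys.path`.

```python
# Hedged sketch: build the data module the same way trainer.py does via
# instantiate_from_config(config.data). The nested dataset target is a
# hypothetical placeholder for illustration only.
from omegaconf import OmegaConf
from ToonCrafter.utils.utils import instantiate_from_config

data_cfg = OmegaConf.create({
    "target": "utils_data.DataModuleFromConfig",   # assumes ToonCrafter/main on sys.path
    "params": {
        "batch_size": 2,
        "num_workers": 4,
        "train": {
            "target": "my_datasets.CartoonClips",  # hypothetical dataset class
            "params": {"data_dir": "/path/to/clips", "video_length": 16},
        },
    },
})
data = instantiate_from_config(data_cfg)
data.setup()                        # instantiates data.datasets["train"]
train_loader = data.train_dataloader()
```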
  {
    "path": "ToonCrafter/main/utils_train.py",
    "content": "import os, re\nfrom omegaconf import OmegaConf\nimport logging\nmainlogger = logging.getLogger('mainlogger')\n\nimport torch\nfrom collections import OrderedDict\n\ndef init_workspace(name, logdir, model_config, lightning_config, rank=0):\n    workdir = os.path.join(logdir, name)\n    ckptdir = os.path.join(workdir, \"checkpoints\")\n    cfgdir = os.path.join(workdir, \"configs\")\n    loginfo = os.path.join(workdir, \"loginfo\")\n\n    # Create logdirs and save configs (all ranks will do to avoid missing directory error if rank:0 is slower)\n    os.makedirs(workdir, exist_ok=True)\n    os.makedirs(ckptdir, exist_ok=True)\n    os.makedirs(cfgdir, exist_ok=True)\n    os.makedirs(loginfo, exist_ok=True)\n\n    if rank == 0:\n        if \"callbacks\" in lightning_config and 'metrics_over_trainsteps_checkpoint' in lightning_config.callbacks:\n            os.makedirs(os.path.join(ckptdir, 'trainstep_checkpoints'), exist_ok=True)\n        OmegaConf.save(model_config, os.path.join(cfgdir, \"model.yaml\"))\n        OmegaConf.save(OmegaConf.create({\"lightning\": lightning_config}), os.path.join(cfgdir, \"lightning.yaml\"))\n    return workdir, ckptdir, cfgdir, loginfo\n\ndef check_config_attribute(config, name):\n    if name in config:\n        value = getattr(config, name)\n        return value\n    else:\n        return None\n\ndef get_trainer_callbacks(lightning_config, config, logdir, ckptdir, logger):\n    default_callbacks_cfg = {\n        \"model_checkpoint\": {\n            \"target\": \"pytorch_lightning.callbacks.ModelCheckpoint\",\n            \"params\": {\n                \"dirpath\": ckptdir,\n                \"filename\": \"{epoch}\",\n                \"verbose\": True,\n                \"save_last\": False,\n            }\n        },\n        \"batch_logger\": {\n            \"target\": \"callbacks.ImageLogger\",\n            \"params\": {\n                \"save_dir\": logdir,\n                \"batch_frequency\": 1000,\n                \"max_images\": 4,\n                \"clamp\": True,\n            }\n        },    \n        \"learning_rate_logger\": {\n            \"target\": \"pytorch_lightning.callbacks.LearningRateMonitor\",\n            \"params\": {\n                \"logging_interval\": \"step\",\n                \"log_momentum\": False\n            }\n        },\n        \"cuda_callback\": {\n            \"target\": \"callbacks.CUDACallback\"\n        },\n    }\n\n    ## optional setting for saving checkpoints\n    monitor_metric = check_config_attribute(config.model.params, \"monitor\")\n    if monitor_metric is not None:\n        mainlogger.info(f\"Monitoring {monitor_metric} as checkpoint metric.\")\n        default_callbacks_cfg[\"model_checkpoint\"][\"params\"][\"monitor\"] = monitor_metric\n        default_callbacks_cfg[\"model_checkpoint\"][\"params\"][\"save_top_k\"] = 3\n        default_callbacks_cfg[\"model_checkpoint\"][\"params\"][\"mode\"] = \"min\"\n\n    if 'metrics_over_trainsteps_checkpoint' in lightning_config.callbacks:\n        mainlogger.info('Caution: Saving checkpoints every n train steps without deleting. 
This might require some free space.')\n        default_metrics_over_trainsteps_ckpt_dict = {\n            'metrics_over_trainsteps_checkpoint': {\"target\": 'pytorch_lightning.callbacks.ModelCheckpoint',\n                                                   'params': {\n                                                        \"dirpath\": os.path.join(ckptdir, 'trainstep_checkpoints'),\n                                                        \"filename\": \"{epoch}-{step}\",\n                                                        \"verbose\": True,\n                                                        'save_top_k': -1,\n                                                        'every_n_train_steps': 10000,\n                                                        'save_weights_only': True\n                                                    }\n                                                }\n        }\n        default_callbacks_cfg.update(default_metrics_over_trainsteps_ckpt_dict)\n\n    if \"callbacks\" in lightning_config:\n        callbacks_cfg = lightning_config.callbacks\n    else:\n        callbacks_cfg = OmegaConf.create()\n    callbacks_cfg = OmegaConf.merge(default_callbacks_cfg, callbacks_cfg)\n\n    return callbacks_cfg\n\ndef get_trainer_logger(lightning_config, logdir, on_debug):\n    default_logger_cfgs = {\n        \"tensorboard\": {\n            \"target\": \"pytorch_lightning.loggers.TensorBoardLogger\",\n            \"params\": {\n                \"save_dir\": logdir,\n                \"name\": \"tensorboard\",\n            }\n        },\n        \"testtube\": {\n            \"target\": \"pytorch_lightning.loggers.CSVLogger\",\n            \"params\": {\n                    \"name\": \"testtube\",\n                    \"save_dir\": logdir,\n                }\n            },\n    }\n    os.makedirs(os.path.join(logdir, \"tensorboard\"), exist_ok=True)\n    default_logger_cfg = default_logger_cfgs[\"tensorboard\"]\n    if \"logger\" in lightning_config:\n        logger_cfg = lightning_config.logger\n    else:\n        logger_cfg = OmegaConf.create()\n    logger_cfg = OmegaConf.merge(default_logger_cfg, logger_cfg)\n    return logger_cfg\n\ndef get_trainer_strategy(lightning_config):\n    default_strategy_dict = {\n        \"target\": \"pytorch_lightning.strategies.DDPShardedStrategy\"\n    }\n    if \"strategy\" in lightning_config:\n        strategy_cfg = lightning_config.strategy\n        return strategy_cfg\n    else:\n        strategy_cfg = OmegaConf.create()\n\n    strategy_cfg = OmegaConf.merge(default_strategy_dict, strategy_cfg)\n    return strategy_cfg\n\ndef load_checkpoints(model, model_cfg):\n    if check_config_attribute(model_cfg, \"pretrained_checkpoint\"):\n        pretrained_ckpt = model_cfg.pretrained_checkpoint\n        assert os.path.exists(pretrained_ckpt), \"Error: Pre-trained checkpoint NOT found at:%s\"%pretrained_ckpt\n        mainlogger.info(\">>> Load weights from pretrained checkpoint\")\n\n        pl_sd = torch.load(pretrained_ckpt, map_location=\"cpu\")\n        try:\n            if 'state_dict' in pl_sd.keys():\n                model.load_state_dict(pl_sd[\"state_dict\"], strict=True)\n                mainlogger.info(\">>> Loaded weights from pretrained checkpoint: %s\"%pretrained_ckpt)\n            else:\n                # deepspeed\n                new_pl_sd = OrderedDict()\n                for key in pl_sd['module'].keys():\n                    new_pl_sd[key[16:]]=pl_sd['module'][key]\n                model.load_state_dict(new_pl_sd, 
strict=True)\n        except:\n            model.load_state_dict(pl_sd)\n    else:\n        mainlogger.info(\">>> Start training from scratch\")\n\n    return model\n\ndef set_logger(logfile, name='mainlogger'):\n    logger = logging.getLogger(name)\n    logger.setLevel(logging.INFO)\n    fh = logging.FileHandler(logfile, mode='w')\n    fh.setLevel(logging.INFO)\n    ch = logging.StreamHandler()\n    ch.setLevel(logging.DEBUG)\n    fh.setFormatter(logging.Formatter(\"%(asctime)s-%(levelname)s: %(message)s\"))\n    ch.setFormatter(logging.Formatter(\"%(message)s\"))\n    logger.addHandler(fh)\n    logger.addHandler(ch)\n    return logger"
  },
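One non-obvious detail in `load_checkpoints()` above is the `key[16:]` slice applied to DeepSpeed-style checkpoints; a tiny hedged illustration of what that strip does, assuming the 16-character prefix is DeepSpeed's `_forward_module.`:

```python
# Hedged illustration: keys under pl_sd['module'] in DeepSpeed checkpoints carry
# a 16-character prefix (presumably '_forward_module.'); key[16:] removes it so
# the names line up with model.state_dict().
from collections import OrderedDict

pl_sd = {"module": {"_forward_module.model.diffusion_model.out.2.weight": "tensor..."}}

new_pl_sd = OrderedDict()
for key in pl_sd["module"].keys():
    new_pl_sd[key[16:]] = pl_sd["module"][key]

print(list(new_pl_sd.keys()))   # ['model.diffusion_model.out.2.weight']
```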
  {
    "path": "ToonCrafter/prompts/512_interp/prompts.txt",
    "content": "walking man\nan anime scene\nan anime scene"
  },
  {
    "path": "ToonCrafter/requirements.txt",
    "content": "decord==0.6.0\neinops==0.3.0\nimageio==2.9.0\nnumpy==1.24.2\nomegaconf==2.1.1\nopencv_python\npandas==2.0.0\nPillow==9.5.0\npytorch_lightning==1.9.3\nPyYAML==6.0\nsetuptools==65.6.3\ntorch==2.0.0\ntorchvision\ntqdm==4.65.0\ntransformers==4.25.1\nmoviepy\nav\nxformers\ngradio\ntimm\nscikit-learn \nopen_clip_torch==2.22.0\nkornia"
  },
  {
    "path": "ToonCrafter/scripts/evaluation/ddp_wrapper.py",
    "content": "import datetime\nimport argparse, importlib\nfrom pytorch_lightning import seed_everything\n\nimport torch\nimport torch.distributed as dist\n\ndef setup_dist(local_rank):\n    if dist.is_initialized():\n        return\n    torch.cuda.set_device(local_rank)\n    torch.distributed.init_process_group('nccl', init_method='env://')\n\n\ndef get_dist_info():\n    if dist.is_available():\n        initialized = dist.is_initialized()\n    else:\n        initialized = False\n    if initialized:\n        rank = dist.get_rank()\n        world_size = dist.get_world_size()\n    else:\n        rank = 0\n        world_size = 1\n    return rank, world_size\n\n\nif __name__ == '__main__':\n    now = datetime.datetime.now().strftime(\"%Y-%m-%d-%H-%M-%S\")\n    parser = argparse.ArgumentParser()\n    parser.add_argument(\"--module\", type=str, help=\"module name\", default=\"inference\")\n    parser.add_argument(\"--local_rank\", type=int, nargs=\"?\", help=\"for ddp\", default=0)\n    args, unknown = parser.parse_known_args()\n    inference_api = importlib.import_module(args.module, package=None)\n\n    inference_parser = inference_api.get_parser()\n    inference_args, unknown = inference_parser.parse_known_args()\n\n    seed_everything(inference_args.seed)\n    setup_dist(args.local_rank)\n    torch.backends.cudnn.benchmark = True\n    rank, gpu_num = get_dist_info()\n\n    # inference_args.savedir = inference_args.savedir+str('_seed')+str(inference_args.seed)\n    print(\"@DynamiCrafter Inference [rank%d]: %s\"%(rank, now))\n    inference_api.run_inference(inference_args, gpu_num, rank)"
  },
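This wrapper initializes NCCL with `init_method='env://'`, so it expects the usual launcher-provided environment (`RANK`, `WORLD_SIZE`, `MASTER_ADDR`, `MASTER_PORT`, plus one `--local_rank` per process), typically supplied by `torch.distributed.launch` or torchrun. A hedged sketch of a single-GPU smoke test that fills in that environment by hand:

```python
# Hedged sketch: fake a one-process 'env://' environment so setup_dist() and
# get_dist_info() can be exercised on a single GPU without a launcher.
import os

os.environ.setdefault("MASTER_ADDR", "127.0.0.1")
os.environ.setdefault("MASTER_PORT", "29500")
os.environ.setdefault("RANK", "0")
os.environ.setdefault("WORLD_SIZE", "1")

from ddp_wrapper import setup_dist, get_dist_info   # assumes scripts/evaluation on sys.path

setup_dist(local_rank=0)             # init_process_group('nccl', init_method='env://')
rank, world_size = get_dist_info()
print(rank, world_size)              # 0 1
```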
  {
    "path": "ToonCrafter/scripts/evaluation/funcs.py",
    "content": "import os\nimport sys\nimport glob\nimport numpy as np\nfrom collections import OrderedDict\nfrom decord import VideoReader, cpu\nimport cv2\n\nimport torch\nimport torchvision\nsys.path.insert(1, os.path.join(sys.path[0], '..', '..'))\nfrom lvdm.models.samplers.ddim import DDIMSampler\nfrom lvdm.models.ddpm3d import LatentDiffusion\nfrom einops import rearrange\n\n\ndef batch_ddim_sampling(model: LatentDiffusion, cond, noise_shape, n_samples=1, ddim_steps=50, ddim_eta=1.0,\n                        cfg_scale=1.0, hs=None, temporal_cfg_scale=None, **kwargs):\n    ddim_sampler = DDIMSampler(model)\n    uncond_type = model.uncond_type\n    batch_size = noise_shape[0]\n    fs = cond[\"fs\"]\n    del cond[\"fs\"]\n    if noise_shape[-1] == 32:\n        timestep_spacing = \"uniform\"\n        guidance_rescale = 0.0\n    else:\n        timestep_spacing = \"uniform_trailing\"\n        guidance_rescale = 0.7\n    # construct unconditional guidance\n    if cfg_scale != 1.0:\n        if uncond_type == \"empty_seq\":\n            prompts = batch_size * [\"\"]\n            # prompts = N * T * [\"\"]  ## if is_imgbatch=True\n            uc_emb = model.get_learned_conditioning(prompts)\n        elif uncond_type == \"zero_embed\":\n            c_emb = cond[\"c_crossattn\"][0] if isinstance(cond, dict) else cond\n            uc_emb = torch.zeros_like(c_emb)\n\n        # process image embedding token\n        if hasattr(model, 'embedder'):\n            uc_img = torch.zeros(noise_shape[0], 3, 224, 224).to(model.device)\n            if uc_img.dtype != model.dtype:\n                uc_img = uc_img.to(model.dtype)\n            # img: b c h w >> b l c\n            uc_img = model.embedder(uc_img)\n            uc_img = model.image_proj_model(uc_img)\n            uc_emb = torch.cat([uc_emb, uc_img], dim=1)\n\n        if isinstance(cond, dict):\n            uc = {key: cond[key] for key in cond.keys()}\n            uc.update({'c_crossattn': [uc_emb]})\n        else:\n            uc = uc_emb\n    else:\n        uc = None\n\n    additional_decode_kwargs = {'ref_context': hs}\n    x_T = None\n    batch_variants = []\n\n    for _ in range(n_samples):\n        if ddim_sampler is not None:\n            kwargs.update({\"clean_cond\": True})\n            samples, _ = ddim_sampler.sample(S=ddim_steps,\n                                             conditioning=cond,\n                                             batch_size=noise_shape[0],\n                                             shape=noise_shape[1:],\n                                             verbose=False,\n                                             unconditional_guidance_scale=cfg_scale,\n                                             unconditional_conditioning=uc,\n                                             eta=ddim_eta,\n                                             temporal_length=noise_shape[2],\n                                             conditional_guidance_scale_temporal=temporal_cfg_scale,\n                                             x_T=x_T,\n                                             fs=fs,\n                                             precision=16 if model.dtype == torch.float16 else 32,\n                                             timestep_spacing=timestep_spacing,\n                                             guidance_rescale=guidance_rescale,\n                                             **kwargs\n                                             )\n        # reconstruct from latent to pixel space\n        batch_images = 
model.decode_first_stage(samples, **additional_decode_kwargs)\n\n        index = list(range(samples.shape[2]))\n        del index[1]\n        del index[-2]\n        samples = samples[:, :, index, :, :]\n        # reconstruct from latent to pixel space\n        batch_images_middle = model.decode_first_stage(samples, **additional_decode_kwargs)\n        batch_images[:, :, batch_images.shape[2] // 2 - 1:batch_images.shape[2] // 2 + 1] = batch_images_middle[:, :, batch_images.shape[2] // 2 - 2:batch_images.shape[2] // 2]\n\n        batch_variants.append(batch_images)\n    # batch, <samples>, c, t, h, w\n    batch_variants = torch.stack(batch_variants, dim=1)\n    return batch_variants\n\n\ndef get_filelist(data_dir, ext='*'):\n    file_list = glob.glob(os.path.join(data_dir, '*.%s' % ext))\n    file_list.sort()\n    return file_list\n\n\ndef get_dirlist(path):\n    list = []\n    if (os.path.exists(path)):\n        files = os.listdir(path)\n        for file in files:\n            m = os.path.join(path, file)\n            if (os.path.isdir(m)):\n                list.append(m)\n    list.sort()\n    return list\n\n\ndef load_model_checkpoint(model, ckpt):\n    def load_checkpoint(model, ckpt, full_strict):\n        try:\n            state_dict = torch.load(ckpt, map_location=\"cpu\")\n        except BaseException:\n            # Read fp16 version\n            from safetensors.torch import load_file\n            state_dict = load_file(ckpt)\n        if \"state_dict\" in list(state_dict.keys()):\n            state_dict = state_dict[\"state_dict\"]\n        try:\n            model.load_state_dict(state_dict, strict=full_strict)\n        except BaseException:\n            # rename the keys for 256x256 model\n            new_pl_sd = OrderedDict()\n            for k, v in state_dict.items():\n                new_pl_sd[k] = v\n\n            for k in list(new_pl_sd.keys()):\n                if \"framestride_embed\" in k:\n                    new_key = k.replace(\"framestride_embed\", \"fps_embedding\")\n                    new_pl_sd[new_key] = new_pl_sd[k]\n                    del new_pl_sd[k]\n            model.load_state_dict(new_pl_sd, strict=full_strict)\n        # No module key in state_dict\n        # else:\n        #     # deepspeed\n        #     new_pl_sd = OrderedDict()\n        #     for key in state_dict['module'].keys():\n        #         new_pl_sd[key[16:]] = state_dict['module'][key]\n        #     model.load_state_dict(new_pl_sd, strict=full_strict)\n\n        return model\n    load_checkpoint(model, ckpt, full_strict=True)\n    print('>>> model checkpoint loaded.')\n    return model\n\n\ndef load_prompts(prompt_file):\n    f = open(prompt_file, 'r')\n    prompt_list = []\n    for idx, line in enumerate(f.readlines()):\n        l = line.strip()\n        if len(l) != 0:\n            prompt_list.append(l)\n        f.close()\n    return prompt_list\n\n\ndef load_video_batch(filepath_list, frame_stride, video_size=(256, 256), video_frames=16):\n    '''\n    Notice about some special cases:\n    1. video_frames=-1 means to take all the frames (with fs=1)\n    2. 
when the total number of video frames is less than required, a padding strategy is used (the last frame is repeated)\n    '''\n    fps_list = []\n    batch_tensor = []\n    assert frame_stride > 0, \"valid frame stride should be a positive integer!\"\n    for filepath in filepath_list:\n        padding_num = 0\n        vidreader = VideoReader(filepath, ctx=cpu(0), width=video_size[1], height=video_size[0])\n        fps = vidreader.get_avg_fps()\n        total_frames = len(vidreader)\n        max_valid_frames = (total_frames - 1) // frame_stride + 1\n        if video_frames < 0:\n            # all frames are collected: fs=1 is a must\n            required_frames = total_frames\n            frame_stride = 1\n        else:\n            required_frames = video_frames\n        query_frames = min(required_frames, max_valid_frames)\n        frame_indices = [frame_stride * i for i in range(query_frames)]\n\n        # [t,h,w,c] -> [c,t,h,w]\n        frames = vidreader.get_batch(frame_indices)\n        frame_tensor = torch.tensor(frames.asnumpy()).permute(3, 0, 1, 2).float()\n        frame_tensor = (frame_tensor / 255. - 0.5) * 2\n        if max_valid_frames < required_frames:\n            padding_num = required_frames - max_valid_frames\n            frame_tensor = torch.cat([frame_tensor, *([frame_tensor[:, -1:, :, :]] * padding_num)], dim=1)\n            print(f'{os.path.split(filepath)[1]} is not long enough: {padding_num} frames padded.')\n        batch_tensor.append(frame_tensor)\n        sample_fps = int(fps / frame_stride)\n        fps_list.append(sample_fps)\n\n    return torch.stack(batch_tensor, dim=0)\n\n\nfrom PIL import Image\n\n\ndef load_image_batch(filepath_list, image_size=(256, 256)):\n    batch_tensor = []\n    for filepath in filepath_list:\n        _, filename = os.path.split(filepath)\n        _, ext = os.path.splitext(filename)\n        if ext == '.mp4':\n            vidreader = VideoReader(filepath, ctx=cpu(0), width=image_size[1], height=image_size[0])\n            frame = vidreader.get_batch([0])\n            img_tensor = torch.tensor(frame.asnumpy()).squeeze(0).permute(2, 0, 1).float()\n        elif ext == '.png' or ext == '.jpg':\n            img = Image.open(filepath).convert(\"RGB\")\n            rgb_img = np.array(img, np.float32)\n            # bgr_img = cv2.imread(filepath, cv2.IMREAD_COLOR)\n            # bgr_img = cv2.cvtColor(bgr_img, cv2.COLOR_BGR2RGB)\n            rgb_img = cv2.resize(rgb_img, (image_size[1], image_size[0]), interpolation=cv2.INTER_LINEAR)\n            img_tensor = torch.from_numpy(rgb_img).permute(2, 0, 1).float()\n        else:\n            print(f'ERROR: <{ext}> image loading only supports formats: [mp4], [png], [jpg]')\n            raise NotImplementedError\n        img_tensor = (img_tensor / 255. 
- 0.5) * 2\n        batch_tensor.append(img_tensor)\n    return torch.stack(batch_tensor, dim=0)\n\n\ndef save_videos(batch_tensors, savedir, filenames, fps=10):\n    # b,samples,c,t,h,w\n    n_samples = batch_tensors.shape[1]\n    for idx, vid_tensor in enumerate(batch_tensors):\n        video = vid_tensor.detach().cpu()\n        video = torch.clamp(video.float(), -1., 1.)\n        video = video.permute(2, 0, 1, 3, 4)  # t,n,c,h,w\n        frame_grids = [torchvision.utils.make_grid(framesheet, nrow=int(n_samples)) for framesheet in video]  # [3, 1*h, n*w]\n        grid = torch.stack(frame_grids, dim=0)  # stack in temporal dim [t, 3, n*h, w]\n        grid = (grid + 1.0) / 2.0\n        grid = (grid * 255).to(torch.uint8).permute(0, 2, 3, 1)\n        savepath = os.path.join(savedir, f\"{filenames[idx]}.mp4\")\n        torchvision.io.write_video(savepath, grid, fps=fps, video_codec='h264', options={'crf': '10'})\n\n\ndef get_latent_z(model, videos):\n    b, c, t, h, w = videos.shape\n    x = rearrange(videos, 'b c t h w -> (b t) c h w')\n    z = model.encode_first_stage(x)\n    z = rearrange(z, '(b t) c h w -> b c t h w', b=b, t=t)\n    return z\n"
  },
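The `get_latent_z()` helper at the end folds the time axis into the batch axis so the image autoencoder sees ordinary `b,c,h,w` tensors, then unfolds the latents back to `b,c,t,h,w`. A self-contained illustration of that reshape round-trip with a stand-in encoder (the 8x spatial reduction and 4 latent channels mirror the `h//8, w//8` assumption used in the inference scripts, but the encoder here is fake):

```python
# Standalone illustration of the rearrange round-trip in get_latent_z().
# fake_encode stands in for model.encode_first_stage and is NOT the real VAE.
import torch
import torch.nn.functional as F
from einops import rearrange

def fake_encode(x):                               # (b*t, 3, H, W) -> (b*t, 4, H/8, W/8)
    pooled = F.avg_pool2d(x, kernel_size=8)       # 8x spatial downscale
    return pooled.repeat(1, 2, 1, 1)[:, :4]       # pretend there are 4 latent channels

videos = torch.randn(2, 3, 16, 512, 320)          # b c t h w
x = rearrange(videos, 'b c t h w -> (b t) c h w')
z = fake_encode(x)
z = rearrange(z, '(b t) c h w -> b c t h w', b=2, t=16)
print(z.shape)                                    # torch.Size([2, 4, 16, 64, 40])
```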
  {
    "path": "ToonCrafter/scripts/evaluation/inference.py",
    "content": "import argparse, os, sys, glob\nimport datetime, time\nfrom omegaconf import OmegaConf\nfrom tqdm import tqdm\nfrom einops import rearrange, repeat\nfrom collections import OrderedDict\n\nimport torch\nimport torchvision\nimport torchvision.transforms as transforms\nfrom pytorch_lightning import seed_everything\nfrom PIL import Image\nsys.path.insert(1, os.path.join(sys.path[0], '..', '..'))\nfrom lvdm.models.samplers.ddim import DDIMSampler\nfrom lvdm.models.samplers.ddim_multiplecond import DDIMSampler as DDIMSampler_multicond\nfrom ToonCrafter.utils.utils import instantiate_from_config\n\n\ndef get_filelist(data_dir, postfixes):\n    patterns = [os.path.join(data_dir, f\"*.{postfix}\") for postfix in postfixes]\n    file_list = []\n    for pattern in patterns:\n        file_list.extend(glob.glob(pattern))\n    file_list.sort()\n    return file_list\n\ndef load_model_checkpoint(model, ckpt):\n    state_dict = torch.load(ckpt, map_location=\"cpu\")\n    if \"state_dict\" in list(state_dict.keys()):\n        state_dict = state_dict[\"state_dict\"]\n        try:\n            model.load_state_dict(state_dict, strict=True)\n        except:\n            ## rename the keys for 256x256 model\n            new_pl_sd = OrderedDict()\n            for k,v in state_dict.items():\n                new_pl_sd[k] = v\n\n            for k in list(new_pl_sd.keys()):\n                if \"framestride_embed\" in k:\n                    new_key = k.replace(\"framestride_embed\", \"fps_embedding\")\n                    new_pl_sd[new_key] = new_pl_sd[k]\n                    del new_pl_sd[k]\n            model.load_state_dict(new_pl_sd, strict=True)\n    else:\n        # deepspeed\n        new_pl_sd = OrderedDict()\n        for key in state_dict['module'].keys():\n            new_pl_sd[key[16:]]=state_dict['module'][key]\n        model.load_state_dict(new_pl_sd)\n    print('>>> model checkpoint loaded.')\n    return model\n\ndef load_prompts(prompt_file):\n    f = open(prompt_file, 'r')\n    prompt_list = []\n    for idx, line in enumerate(f.readlines()):\n        l = line.strip()\n        if len(l) != 0:\n            prompt_list.append(l)\n        f.close()\n    return prompt_list\n\ndef load_data_prompts(data_dir, video_size=(256,256), video_frames=16, interp=False):\n    transform = transforms.Compose([\n        transforms.Resize(min(video_size)),\n        transforms.CenterCrop(video_size),\n        transforms.ToTensor(),\n        transforms.Normalize(mean=(0.5, 0.5, 0.5), std=(0.5, 0.5, 0.5))])\n    ## load prompts\n    prompt_file = get_filelist(data_dir, ['txt'])\n    assert len(prompt_file) > 0, \"Error: found NO prompt file!\"\n    ###### default prompt\n    default_idx = 0\n    default_idx = min(default_idx, len(prompt_file)-1)\n    if len(prompt_file) > 1:\n        print(f\"Warning: multiple prompt files exist. 
The one {os.path.split(prompt_file[default_idx])[1]} is used.\")\n    ## only use the first one (sorted by name) if multiple exist\n    \n    ## load video\n    file_list = get_filelist(data_dir, ['jpg', 'png', 'jpeg', 'JPEG', 'PNG'])\n    # assert len(file_list) == n_samples, \"Error: data and prompts are NOT paired!\"\n    data_list = []\n    filename_list = []\n    prompt_list = load_prompts(prompt_file[default_idx])\n    n_samples = len(prompt_list)\n    for idx in range(n_samples):\n        if interp:\n            image1 = Image.open(file_list[2*idx]).convert('RGB')\n            image_tensor1 = transform(image1).unsqueeze(1) # [c,1,h,w]\n            image2 = Image.open(file_list[2*idx+1]).convert('RGB')\n            image_tensor2 = transform(image2).unsqueeze(1) # [c,1,h,w]\n            frame_tensor1 = repeat(image_tensor1, 'c t h w -> c (repeat t) h w', repeat=video_frames//2)\n            frame_tensor2 = repeat(image_tensor2, 'c t h w -> c (repeat t) h w', repeat=video_frames//2)\n            frame_tensor = torch.cat([frame_tensor1, frame_tensor2], dim=1)\n            _, filename = os.path.split(file_list[idx*2])\n        else:\n            image = Image.open(file_list[idx]).convert('RGB')\n            image_tensor = transform(image).unsqueeze(1) # [c,1,h,w]\n            frame_tensor = repeat(image_tensor, 'c t h w -> c (repeat t) h w', repeat=video_frames)\n            _, filename = os.path.split(file_list[idx])\n\n        data_list.append(frame_tensor)\n        filename_list.append(filename)\n        \n    return filename_list, data_list, prompt_list\n\n\ndef save_results(prompt, samples, filename, fakedir, fps=8, loop=False):\n    filename = filename.split('.')[0]+'.mp4'\n    prompt = prompt[0] if isinstance(prompt, list) else prompt\n\n    ## save video\n    videos = [samples]\n    savedirs = [fakedir]\n    for idx, video in enumerate(videos):\n        if video is None:\n            continue\n        # b,c,t,h,w\n        video = video.detach().cpu()\n        video = torch.clamp(video.float(), -1., 1.)\n        n = video.shape[0]\n        video = video.permute(2, 0, 1, 3, 4) # t,n,c,h,w\n        if loop:\n            video = video[:-1,...]\n        \n        frame_grids = [torchvision.utils.make_grid(framesheet, nrow=int(n), padding=0) for framesheet in video] #[3, 1*h, n*w]\n        grid = torch.stack(frame_grids, dim=0) # stack in temporal dim [t, 3, h, n*w]\n        grid = (grid + 1.0) / 2.0\n        grid = (grid * 255).to(torch.uint8).permute(0, 2, 3, 1)\n        path = os.path.join(savedirs[idx], filename)\n        torchvision.io.write_video(path, grid, fps=fps, video_codec='h264', options={'crf': '10'}) ## crf indicates the quality\n\n\ndef save_results_seperate(prompt, samples, filename, fakedir, fps=10, loop=False):\n    prompt = prompt[0] if isinstance(prompt, list) else prompt\n\n    ## save video\n    videos = [samples]\n    savedirs = [fakedir]\n    for idx, video in enumerate(videos):\n        if video is None:\n            continue\n        # b,c,t,h,w\n        video = video.detach().cpu()\n        if loop: # remove the last frame\n            video = video[:,:,:-1,...]\n        video = torch.clamp(video.float(), -1., 1.)\n        n = video.shape[0]\n        for i in range(n):\n            grid = video[i,...]\n            grid = (grid + 1.0) / 2.0\n            grid = (grid * 255).to(torch.uint8).permute(1, 2, 3, 0) #thwc\n            path = os.path.join(savedirs[idx].replace('samples', 'samples_separate'), f'{filename.split(\".\")[0]}_sample{i}.mp4')\n            
torchvision.io.write_video(path, grid, fps=fps, video_codec='h264', options={'crf': '10'})\n\ndef get_latent_z(model, videos):\n    b, c, t, h, w = videos.shape\n    x = rearrange(videos, 'b c t h w -> (b t) c h w')\n    z = model.encode_first_stage(x)\n    z = rearrange(z, '(b t) c h w -> b c t h w', b=b, t=t)\n    return z\n\ndef get_latent_z_with_hidden_states(model, videos):\n    b, c, t, h, w = videos.shape\n    x = rearrange(videos, 'b c t h w -> (b t) c h w')\n    encoder_posterior, hidden_states = model.first_stage_model.encode(x, return_hidden_states=True)\n\n    hidden_states_first_last = []\n    ### use only the first and last hidden states\n    for hid in hidden_states:\n        hid = rearrange(hid, '(b t) c h w -> b c t h w', t=t)\n        hid_new = torch.cat([hid[:, :, 0:1], hid[:, :, -1:]], dim=2)\n        hidden_states_first_last.append(hid_new)\n\n    z = model.get_first_stage_encoding(encoder_posterior).detach()\n    z = rearrange(z, '(b t) c h w -> b c t h w', b=b, t=t)\n    return z, hidden_states_first_last\n\ndef image_guided_synthesis(model, prompts, videos, noise_shape, n_samples=1, ddim_steps=50, ddim_eta=1., \\\n                        unconditional_guidance_scale=1.0, cfg_img=None, fs=None, text_input=False, multiple_cond_cfg=False, loop=False, interp=False, timestep_spacing='uniform', guidance_rescale=0.0, **kwargs):\n    ddim_sampler = DDIMSampler(model) if not multiple_cond_cfg else DDIMSampler_multicond(model)\n    batch_size = noise_shape[0]\n    fs = torch.tensor([fs] * batch_size, dtype=torch.long, device=model.device)\n\n    if not text_input:\n        prompts = [\"\"]*batch_size\n\n    img = videos[:,:,0] #bchw\n    img_emb = model.embedder(img) ## blc\n    img_emb = model.image_proj_model(img_emb)\n\n    cond_emb = model.get_learned_conditioning(prompts)\n    cond = {\"c_crossattn\": [torch.cat([cond_emb,img_emb], dim=1)]}\n    if model.model.conditioning_key == 'hybrid':\n        z, hs = get_latent_z_with_hidden_states(model, videos) # b c t h w\n        if loop or interp:\n            img_cat_cond = torch.zeros_like(z)\n            img_cat_cond[:,:,0,:,:] = z[:,:,0,:,:]\n            img_cat_cond[:,:,-1,:,:] = z[:,:,-1,:,:]\n        else:\n            img_cat_cond = z[:,:,:1,:,:]\n            img_cat_cond = repeat(img_cat_cond, 'b c t h w -> b c (repeat t) h w', repeat=z.shape[2])\n        cond[\"c_concat\"] = [img_cat_cond] # b c 1 h w\n    \n    if unconditional_guidance_scale != 1.0:\n        if model.uncond_type == \"empty_seq\":\n            prompts = batch_size * [\"\"]\n            uc_emb = model.get_learned_conditioning(prompts)\n        elif model.uncond_type == \"zero_embed\":\n            uc_emb = torch.zeros_like(cond_emb)\n        uc_img_emb = model.embedder(torch.zeros_like(img)) ## b l c\n        uc_img_emb = model.image_proj_model(uc_img_emb)\n        uc = {\"c_crossattn\": [torch.cat([uc_emb,uc_img_emb],dim=1)]}\n        if model.model.conditioning_key == 'hybrid':\n            uc[\"c_concat\"] = [img_cat_cond]\n    else:\n        uc = None\n\n    additional_decode_kwargs = {'ref_context': hs}\n\n    ## we need one more unconditioning image=yes, text=\"\"\n    if multiple_cond_cfg and cfg_img != 1.0:\n        uc_2 = {\"c_crossattn\": [torch.cat([uc_emb,img_emb],dim=1)]}\n        if model.model.conditioning_key == 'hybrid':\n            uc_2[\"c_concat\"] = [img_cat_cond]\n        kwargs.update({\"unconditional_conditioning_img_nonetext\": uc_2})\n    else:\n        kwargs.update({\"unconditional_conditioning_img_nonetext\": None})\n\n  
  z0 = None\n    cond_mask = None\n\n    batch_variants = []\n    for _ in range(n_samples):\n\n        if z0 is not None:\n            cond_z0 = z0.clone()\n            kwargs.update({\"clean_cond\": True})\n        else:\n            cond_z0 = None\n        if ddim_sampler is not None:\n\n            samples, _ = ddim_sampler.sample(S=ddim_steps,\n                                            conditioning=cond,\n                                            batch_size=batch_size,\n                                            shape=noise_shape[1:],\n                                            verbose=False,\n                                            unconditional_guidance_scale=unconditional_guidance_scale,\n                                            unconditional_conditioning=uc,\n                                            eta=ddim_eta,\n                                            cfg_img=cfg_img, \n                                            mask=cond_mask,\n                                            x0=cond_z0,\n                                            fs=fs,\n                                            timestep_spacing=timestep_spacing,\n                                            guidance_rescale=guidance_rescale,\n                                            **kwargs\n                                            )\n        ## reconstruct from latent to pixel space\n        batch_images = model.decode_first_stage(samples, **additional_decode_kwargs)\n\n        index = list(range(samples.shape[2]))\n        del index[1]\n        del index[-2]\n        samples = samples[:, :, index, :, :]\n        ## reconstruct from latent to pixel space\n        batch_images_middle = model.decode_first_stage(samples, **additional_decode_kwargs)\n        batch_images[:, :, batch_images.shape[2] // 2 - 1:batch_images.shape[2] // 2 + 1] = batch_images_middle[:, :, batch_images.shape[2] // 2 - 2:batch_images.shape[2] // 2]\n\n        batch_variants.append(batch_images)\n    ## variants, batch, c, t, h, w\n    batch_variants = torch.stack(batch_variants)\n    return batch_variants.permute(1, 0, 2, 3, 4, 5)\n\n\ndef run_inference(args, gpu_num, gpu_no):\n    ## model config\n    config = OmegaConf.load(args.config)\n    model_config = config.pop(\"model\", OmegaConf.create())\n    \n    ## set use_checkpoint as False as when using deepspeed, it encounters an error \"deepspeed backend not set\"\n    model_config['params']['unet_config']['params']['use_checkpoint'] = False\n    model = instantiate_from_config(model_config)\n    model = model.cuda(gpu_no)\n    model.perframe_ae = args.perframe_ae\n    assert os.path.exists(args.ckpt_path), \"Error: checkpoint Not Found!\"\n    model = load_model_checkpoint(model, args.ckpt_path)\n    model.eval()\n\n    ## run over data\n    assert (args.height % 16 == 0) and (args.width % 16 == 0), \"Error: image size [h,w] should be multiples of 16!\"\n    assert args.bs == 1, \"Current implementation only support [batch size = 1]!\"\n    ## latent noise shape\n    h, w = args.height // 8, args.width // 8\n    channels = model.model.diffusion_model.out_channels\n    n_frames = args.video_length\n    print(f'Inference with {n_frames} frames')\n    noise_shape = [args.bs, channels, n_frames, h, w]\n\n    fakedir = os.path.join(args.savedir, \"samples\")\n    fakedir_separate = os.path.join(args.savedir, \"samples_separate\")\n\n    # os.makedirs(fakedir, exist_ok=True)\n    os.makedirs(fakedir_separate, exist_ok=True)\n\n    ## prompt file setting\n    assert 
os.path.exists(args.prompt_dir), \"Error: prompt file Not Found!\"\n    filename_list, data_list, prompt_list = load_data_prompts(args.prompt_dir, video_size=(args.height, args.width), video_frames=n_frames, interp=args.interp)\n    num_samples = len(prompt_list)\n    samples_split = num_samples // gpu_num\n    print('Prompts testing [rank:%d] %d/%d samples loaded.'%(gpu_no, samples_split, num_samples))\n    #indices = random.choices(list(range(0, num_samples)), k=samples_per_device)\n    indices = list(range(samples_split*gpu_no, samples_split*(gpu_no+1)))\n    prompt_list_rank = [prompt_list[i] for i in indices]\n    data_list_rank = [data_list[i] for i in indices]\n    filename_list_rank = [filename_list[i] for i in indices]\n\n    start = time.time()\n    with torch.no_grad(), torch.cuda.amp.autocast():\n        for idx, indice in tqdm(enumerate(range(0, len(prompt_list_rank), args.bs)), desc='Sample Batch'):\n            prompts = prompt_list_rank[indice:indice+args.bs]\n            videos = data_list_rank[indice:indice+args.bs]\n            filenames = filename_list_rank[indice:indice+args.bs]\n            if isinstance(videos, list):\n                videos = torch.stack(videos, dim=0).to(\"cuda\")\n            else:\n                videos = videos.unsqueeze(0).to(\"cuda\")\n\n            batch_samples = image_guided_synthesis(model, prompts, videos, noise_shape, args.n_samples, args.ddim_steps, args.ddim_eta, \\\n                                args.unconditional_guidance_scale, args.cfg_img, args.frame_stride, args.text_input, args.multiple_cond_cfg, args.loop, args.interp, args.timestep_spacing, args.guidance_rescale)\n\n            ## save each example individually\n            for nn, samples in enumerate(batch_samples):\n                ## samples : [n_samples,c,t,h,w]\n                prompt = prompts[nn]\n                filename = filenames[nn]\n                # save_results(prompt, samples, filename, fakedir, fps=8, loop=args.loop)\n                save_results_seperate(prompt, samples, filename, fakedir, fps=8, loop=args.loop)\n\n    print(f\"Saved in {args.savedir}. 
Time used: {(time.time() - start):.2f} seconds\")\n\n\ndef get_parser():\n    parser = argparse.ArgumentParser()\n    parser.add_argument(\"--savedir\", type=str, default=None, help=\"results saving path\")\n    parser.add_argument(\"--ckpt_path\", type=str, default=None, help=\"checkpoint path\")\n    parser.add_argument(\"--config\", type=str, help=\"config (yaml) path\")\n    parser.add_argument(\"--prompt_dir\", type=str, default=None, help=\"a data dir containing videos and prompts\")\n    parser.add_argument(\"--n_samples\", type=int, default=1, help=\"num of samples per prompt\",)\n    parser.add_argument(\"--ddim_steps\", type=int, default=50, help=\"steps of ddim if positive, otherwise use DDPM\",)\n    parser.add_argument(\"--ddim_eta\", type=float, default=1.0, help=\"eta for ddim sampling (0.0 yields deterministic sampling)\",)\n    parser.add_argument(\"--bs\", type=int, default=1, help=\"batch size for inference, should be one\")\n    parser.add_argument(\"--height\", type=int, default=512, help=\"image height, in pixel space\")\n    parser.add_argument(\"--width\", type=int, default=512, help=\"image width, in pixel space\")\n    parser.add_argument(\"--frame_stride\", type=int, default=3, help=\"frame stride control for 256 model (larger->larger motion), FPS control for 512 or 1024 model (smaller->larger motion)\")\n    parser.add_argument(\"--unconditional_guidance_scale\", type=float, default=1.0, help=\"prompt classifier-free guidance\")\n    parser.add_argument(\"--seed\", type=int, default=123, help=\"seed for seed_everything\")\n    parser.add_argument(\"--video_length\", type=int, default=16, help=\"inference video length\")\n    parser.add_argument(\"--negative_prompt\", action='store_true', default=False, help=\"negative prompt\")\n    parser.add_argument(\"--text_input\", action='store_true', default=False, help=\"input text to I2V model or not\")\n    parser.add_argument(\"--multiple_cond_cfg\", action='store_true', default=False, help=\"use multi-condition cfg or not\")\n    parser.add_argument(\"--cfg_img\", type=float, default=None, help=\"guidance scale for image conditioning\")\n    parser.add_argument(\"--timestep_spacing\", type=str, default=\"uniform\", help=\"The way the timesteps should be scaled. Refer to Table 2 of the [Common Diffusion Noise Schedules and Sample Steps are Flawed](https://huggingface.co/papers/2305.08891) for more information.\")\n    parser.add_argument(\"--guidance_rescale\", type=float, default=0.0, help=\"guidance rescale in [Common Diffusion Noise Schedules and Sample Steps are Flawed](https://huggingface.co/papers/2305.08891)\")\n    parser.add_argument(\"--perframe_ae\", action='store_true', default=False, help=\"if we use per-frame AE decoding, set it to True to save GPU memory, especially for the model of 576x1024\")\n\n    ## currently not support looping video and generative frame interpolation\n    parser.add_argument(\"--loop\", action='store_true', default=False, help=\"generate looping videos or not\")\n    parser.add_argument(\"--interp\", action='store_true', default=False, help=\"generate generative frame interpolation or not\")\n    return parser\n\n\nif __name__ == '__main__':\n    now = datetime.datetime.now().strftime(\"%Y-%m-%d-%H-%M-%S\")\n    print(\"@DynamiCrafter cond-Inference: %s\"%now)\n    parser = get_parser()\n    args = parser.parse_args()\n    \n    seed_everything(args.seed)\n    rank, gpu_num = 0, 1\n    run_inference(args, gpu_num, rank)"
  },
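A hedged sketch of driving `run_inference()` without the shell, mirroring the single-GPU branch of the `__main__` block above; every path here is a placeholder, and the 320x512 size is just one plausible choice for the interpolation model.

```python
# Hedged sketch: programmatic single-GPU inference; paths are placeholders.
from pytorch_lightning import seed_everything
from inference import get_parser, run_inference   # assumes scripts/evaluation on sys.path

args = get_parser().parse_args([
    "--config", "configs/inference_512_v1.0.yaml",                       # placeholder
    "--ckpt_path", "checkpoints/tooncrafter_512_interp_v1/model.ckpt",   # placeholder
    "--prompt_dir", "prompts/512_interp",
    "--savedir", "results/512_interp",
    "--height", "320", "--width", "512",
    "--interp", "--text_input",
])
seed_everything(args.seed)
run_inference(args, gpu_num=1, gpu_no=0)
```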
  {
    "path": "ToonCrafter/scripts/gradio/i2v_test.py",
    "content": "import os\nimport time\nfrom omegaconf import OmegaConf\nimport torch\nfrom scripts.evaluation.funcs import load_model_checkpoint, save_videos, batch_ddim_sampling, get_latent_z\nfrom ToonCrafter.utils.utils import instantiate_from_config\nfrom huggingface_hub import hf_hub_download\nfrom einops import repeat\nimport torchvision.transforms as transforms\nfrom pytorch_lightning import seed_everything\n\n\nclass Image2Video():\n    def __init__(self,result_dir='./tmp/',gpu_num=1,resolution='256_256') -> None:\n        self.resolution = (int(resolution.split('_')[0]), int(resolution.split('_')[1])) #hw\n        self.download_model()\n        \n        self.result_dir = result_dir\n        if not os.path.exists(self.result_dir):\n            os.mkdir(self.result_dir)\n        ckpt_path='checkpoints/dynamicrafter_'+resolution.split('_')[1]+'_v1/model.ckpt'\n        config_file='configs/inference_'+resolution.split('_')[1]+'_v1.0.yaml'\n        config = OmegaConf.load(config_file)\n        model_config = config.pop(\"model\", OmegaConf.create())\n        model_config['params']['unet_config']['params']['use_checkpoint']=False   \n        model_list = []\n        for gpu_id in range(gpu_num):\n            model = instantiate_from_config(model_config)\n            # model = model.cuda(gpu_id)\n            assert os.path.exists(ckpt_path), \"Error: checkpoint Not Found!\"\n            model = load_model_checkpoint(model, ckpt_path)\n            model.eval()\n            model_list.append(model)\n        self.model_list = model_list\n        self.save_fps = 8\n\n    def get_image(self, image, prompt, steps=50, cfg_scale=7.5, eta=1.0, fs=3, seed=123):\n        seed_everything(seed)\n        transform = transforms.Compose([\n            transforms.Resize(min(self.resolution)),\n            transforms.CenterCrop(self.resolution),\n            ])\n        torch.cuda.empty_cache()\n        print('start:', prompt, time.strftime('%Y-%m-%d %H:%M:%S',time.localtime(time.time())))\n        start = time.time()\n        gpu_id=0\n        if steps > 60:\n            steps = 60 \n        model = self.model_list[gpu_id]\n        model = model.cuda()\n        batch_size=1\n        channels = model.model.diffusion_model.out_channels\n        frames = model.temporal_length\n        h, w = self.resolution[0] // 8, self.resolution[1] // 8\n        noise_shape = [batch_size, channels, frames, h, w]\n\n        # text cond\n        with torch.no_grad(), torch.cuda.amp.autocast():\n            text_emb = model.get_learned_conditioning([prompt])\n\n            # img cond\n            img_tensor = torch.from_numpy(image).permute(2, 0, 1).float().to(model.device)\n            img_tensor = (img_tensor / 255. 
- 0.5) * 2\n\n            image_tensor_resized = transform(img_tensor) #3,h,w\n            videos = image_tensor_resized.unsqueeze(0) # bchw\n            \n            z = get_latent_z(model, videos.unsqueeze(2)) #bc,1,hw\n            \n            img_tensor_repeat = repeat(z, 'b c t h w -> b c (repeat t) h w', repeat=frames)\n\n            cond_images = model.embedder(img_tensor.unsqueeze(0)) ## blc\n            img_emb = model.image_proj_model(cond_images)\n\n            imtext_cond = torch.cat([text_emb, img_emb], dim=1)\n\n            fs = torch.tensor([fs], dtype=torch.long, device=model.device)\n            cond = {\"c_crossattn\": [imtext_cond], \"fs\": fs, \"c_concat\": [img_tensor_repeat]}\n            \n            ## inference\n            batch_samples = batch_ddim_sampling(model, cond, noise_shape, n_samples=1, ddim_steps=steps, ddim_eta=eta, cfg_scale=cfg_scale)\n            ## b,samples,c,t,h,w\n            prompt_str = prompt.replace(\"/\", \"_slash_\") if \"/\" in prompt else prompt\n            prompt_str = prompt_str.replace(\" \", \"_\") if \" \" in prompt else prompt_str\n            prompt_str=prompt_str[:40]\n            if len(prompt_str) == 0:\n                prompt_str = 'empty_prompt'\n\n        save_videos(batch_samples, self.result_dir, filenames=[prompt_str], fps=self.save_fps)\n        print(f\"Saved in {prompt_str}. Time used: {(time.time() - start):.2f} seconds\")\n        model = model.cpu()\n        return os.path.join(self.result_dir, f\"{prompt_str}.mp4\")\n    \n    def download_model(self):\n        REPO_ID = 'Doubiiu/DynamiCrafter_'+str(self.resolution[1]) if self.resolution[1]!=256 else 'Doubiiu/DynamiCrafter'\n        filename_list = ['model.ckpt']\n        if not os.path.exists('./checkpoints/dynamicrafter_'+str(self.resolution[1])+'_v1/'):\n            os.makedirs('./checkpoints/dynamicrafter_'+str(self.resolution[1])+'_v1/')\n        for filename in filename_list:\n            local_file = os.path.join('./checkpoints/dynamicrafter_'+str(self.resolution[1])+'_v1/', filename)\n            if not os.path.exists(local_file):\n                hf_hub_download(repo_id=REPO_ID, filename=filename, local_dir='./checkpoints/dynamicrafter_'+str(self.resolution[1])+'_v1/', local_dir_use_symlinks=False)\n    \nif __name__ == '__main__':\n    i2v = Image2Video()\n    video_path = i2v.get_image('prompts/art.png','man fishing in a boat at sunset')\n    print('done', video_path)"
  },
  {
    "path": "ToonCrafter/scripts/gradio/i2v_test_application.py",
    "content": "import os\nimport time\nfrom omegaconf import OmegaConf\nimport torch\nfrom scripts.evaluation.funcs import load_model_checkpoint, save_videos, batch_ddim_sampling, get_latent_z\nfrom ToonCrafter.utils.utils import instantiate_from_config\nfrom huggingface_hub import hf_hub_download\nfrom einops import repeat\nimport torchvision.transforms as transforms\nfrom pytorch_lightning import seed_everything\nfrom einops import rearrange\n\nclass Image2Video():\n    def __init__(self,result_dir='./tmp/',gpu_num=1,resolution='256_256') -> None:\n        self.resolution = (int(resolution.split('_')[0]), int(resolution.split('_')[1])) #hw\n        self.download_model()\n        \n        self.result_dir = result_dir\n        if not os.path.exists(self.result_dir):\n            os.mkdir(self.result_dir)\n        ckpt_path='checkpoints/tooncrafter_'+resolution.split('_')[1]+'_interp_v1/model.ckpt'\n        config_file='configs/inference_'+resolution.split('_')[1]+'_v1.0.yaml'\n        config = OmegaConf.load(config_file)\n        model_config = config.pop(\"model\", OmegaConf.create())\n        model_config['params']['unet_config']['params']['use_checkpoint']=False   \n        model_list = []\n        for gpu_id in range(gpu_num):\n            model = instantiate_from_config(model_config)\n            # model = model.cuda(gpu_id)\n            print(ckpt_path)\n            assert os.path.exists(ckpt_path), \"Error: checkpoint Not Found!\"\n            model = load_model_checkpoint(model, ckpt_path)\n            model.eval()\n            model_list.append(model)\n        self.model_list = model_list\n        self.save_fps = 8\n\n    def get_image(self, image, prompt, steps=50, cfg_scale=7.5, eta=1.0, fs=3, seed=123, image2=None):\n        seed_everything(seed)\n        transform = transforms.Compose([\n            transforms.Resize(min(self.resolution)),\n            transforms.CenterCrop(self.resolution),\n            ])\n        torch.cuda.empty_cache()\n        print('start:', prompt, time.strftime('%Y-%m-%d %H:%M:%S',time.localtime(time.time())))\n        start = time.time()\n        gpu_id=0\n        if steps > 60:\n            steps = 60 \n        model = self.model_list[gpu_id]\n        model = model.cuda()\n        batch_size=1\n        channels = model.model.diffusion_model.out_channels\n        frames = model.temporal_length\n        h, w = self.resolution[0] // 8, self.resolution[1] // 8\n        noise_shape = [batch_size, channels, frames, h, w]\n\n        # text cond\n        with torch.no_grad(), torch.cuda.amp.autocast():\n            text_emb = model.get_learned_conditioning([prompt])\n\n            # img cond\n            img_tensor = torch.from_numpy(image).permute(2, 0, 1).float().to(model.device)\n            img_tensor = (img_tensor / 255. - 0.5) * 2\n\n            image_tensor_resized = transform(img_tensor) #3,h,w\n            videos = image_tensor_resized.unsqueeze(0).unsqueeze(2) # bc1hw\n            \n            # z = get_latent_z(model, videos) #bc,1,hw\n            videos = repeat(videos, 'b c t h w -> b c (repeat t) h w', repeat=frames//2)\n            \n            \n\n\n            img_tensor2 = torch.from_numpy(image2).permute(2, 0, 1).float().to(model.device)\n            img_tensor2 = (img_tensor2 / 255. 
- 0.5) * 2\n            image_tensor_resized2 = transform(img_tensor2) #3,h,w\n            videos2 = image_tensor_resized2.unsqueeze(0).unsqueeze(2) # bchw\n            videos2 = repeat(videos2, 'b c t h w -> b c (repeat t) h w', repeat=frames//2)\n            \n            \n            videos = torch.cat([videos, videos2], dim=2)\n            z, hs = self.get_latent_z_with_hidden_states(model, videos)\n\n            img_tensor_repeat = torch.zeros_like(z)\n\n            img_tensor_repeat[:,:,:1,:,:] = z[:,:,:1,:,:]\n            img_tensor_repeat[:,:,-1:,:,:] = z[:,:,-1:,:,:]\n\n\n            cond_images = model.embedder(img_tensor.unsqueeze(0)) ## blc\n            img_emb = model.image_proj_model(cond_images)\n\n            imtext_cond = torch.cat([text_emb, img_emb], dim=1)\n\n            fs = torch.tensor([fs], dtype=torch.long, device=model.device)\n            cond = {\"c_crossattn\": [imtext_cond], \"fs\": fs, \"c_concat\": [img_tensor_repeat]}\n            \n            ## inference\n            batch_samples = batch_ddim_sampling(model, cond, noise_shape, n_samples=1, ddim_steps=steps, ddim_eta=eta, cfg_scale=cfg_scale, hs=hs)\n\n            ## remove the last frame\n            if image2 is None:\n                batch_samples = batch_samples[:,:,:,:-1,...]\n            ## b,samples,c,t,h,w\n            prompt_str = prompt.replace(\"/\", \"_slash_\") if \"/\" in prompt else prompt\n            prompt_str = prompt_str.replace(\" \", \"_\") if \" \" in prompt else prompt_str\n            prompt_str=prompt_str[:40]\n            if len(prompt_str) == 0:\n                prompt_str = 'empty_prompt'\n\n        save_videos(batch_samples, self.result_dir, filenames=[prompt_str], fps=self.save_fps)\n        print(f\"Saved in {prompt_str}. Time used: {(time.time() - start):.2f} seconds\")\n        model = model.cpu()\n        return os.path.join(self.result_dir, f\"{prompt_str}.mp4\")\n    \n    def download_model(self):\n        REPO_ID = 'Doubiiu/ToonCrafter'\n        filename_list = ['model.ckpt']\n        if not os.path.exists('./checkpoints/tooncrafter_'+str(self.resolution[1])+'_interp_v1/'):\n            os.makedirs('./checkpoints/tooncrafter_'+str(self.resolution[1])+'_interp_v1/')\n        for filename in filename_list:\n            local_file = os.path.join('./checkpoints/tooncrafter_'+str(self.resolution[1])+'_interp_v1/', filename)\n            if not os.path.exists(local_file):\n                hf_hub_download(repo_id=REPO_ID, filename=filename, local_dir='./checkpoints/tooncrafter_'+str(self.resolution[1])+'_interp_v1/', local_dir_use_symlinks=False)\n    \n    def get_latent_z_with_hidden_states(self, model, videos):\n        b, c, t, h, w = videos.shape\n        x = rearrange(videos, 'b c t h w -> (b t) c h w')\n        encoder_posterior, hidden_states = model.first_stage_model.encode(x, return_hidden_states=True)\n\n        hidden_states_first_last = []\n        ### use only the first and last hidden states\n        for hid in hidden_states:\n            hid = rearrange(hid, '(b t) c h w -> b c t h w', t=t)\n            hid_new = torch.cat([hid[:, :, 0:1], hid[:, :, -1:]], dim=2)\n            hidden_states_first_last.append(hid_new)\n\n        z = model.get_first_stage_encoding(encoder_posterior).detach()\n        z = rearrange(z, '(b t) c h w -> b c t h w', b=b, t=t)\n        return z, hidden_states_first_last\nif __name__ == '__main__':\n    i2v = Image2Video()\n    video_path = i2v.get_image('prompts/art.png','man fishing in a boat at sunset')\n    print('done', 
video_path)"
  },
  {
    "path": "ToonCrafter/scripts/run.sh",
    "content": "\nckpt=checkpoints/tooncrafter_512_interp_v1/model.ckpt\nconfig=configs/inference_512_v1.0.yaml\n\nprompt_dir=prompts/512_interp/\nres_dir=\"results\"\n\nFS=10 ## This model adopts FPS=5, range recommended: 5-30 (smaller value -> larger motion)\n\n\n\nseed=123\nname=tooncrafter_512_interp_seed${seed}\nCUDA_VISIBLE_DEVICES=0 python3 scripts/evaluation/inference.py \\\n--seed ${seed} \\\n--ckpt_path $ckpt \\\n--config $config \\\n--savedir $res_dir/$name \\\n--n_samples 1 \\\n--bs 1 --height 320 --width 512 \\\n--unconditional_guidance_scale 7.5 \\\n--ddim_steps 50 \\\n--ddim_eta 1.0 \\\n--prompt_dir $prompt_dir \\\n--text_input \\\n--video_length 16 \\\n--frame_stride ${FS} \\\n--timestep_spacing 'uniform_trailing' --guidance_rescale 0.7 --perframe_ae --interp\n"
  },
  {
    "path": "ToonCrafter/utils/__init__.py",
    "content": ""
  },
  {
    "path": "ToonCrafter/utils/save_video.py",
    "content": "import os\nimport numpy as np\nfrom tqdm import tqdm\nfrom PIL import Image\nfrom einops import rearrange\n\nimport torch\nimport torchvision\nfrom torch import Tensor\nfrom torchvision.utils import make_grid\nfrom torchvision.transforms.functional import to_tensor\n\n\ndef frames_to_mp4(frame_dir,output_path,fps):\n    def read_first_n_frames(d: os.PathLike, num_frames: int):\n        if num_frames:\n            images = [Image.open(os.path.join(d, f)) for f in sorted(os.listdir(d))[:num_frames]]\n        else:\n            images = [Image.open(os.path.join(d, f)) for f in sorted(os.listdir(d))]\n        images = [to_tensor(x) for x in images]\n        return torch.stack(images)\n    videos = read_first_n_frames(frame_dir, num_frames=None)\n    videos = videos.mul(255).to(torch.uint8).permute(0, 2, 3, 1)\n    torchvision.io.write_video(output_path, videos, fps=fps, video_codec='h264', options={'crf': '10'})\n\n\ndef tensor_to_mp4(video, savepath, fps, rescale=True, nrow=None):\n    \"\"\"\n    video: torch.Tensor, b,c,t,h,w, 0-1\n    if -1~1, enable rescale=True\n    \"\"\"\n    n = video.shape[0]\n    video = video.permute(2, 0, 1, 3, 4) # t,n,c,h,w\n    nrow = int(np.sqrt(n)) if nrow is None else nrow\n    frame_grids = [torchvision.utils.make_grid(framesheet, nrow=nrow, padding=0) for framesheet in video] # [3, grid_h, grid_w]\n    grid = torch.stack(frame_grids, dim=0) # stack in temporal dim [T, 3, grid_h, grid_w]\n    grid = torch.clamp(grid.float(), -1., 1.)\n    if rescale:\n        grid = (grid + 1.0) / 2.0\n    grid = (grid * 255).to(torch.uint8).permute(0, 2, 3, 1) # [T, 3, grid_h, grid_w] -> [T, grid_h, grid_w, 3]\n    torchvision.io.write_video(savepath, grid, fps=fps, video_codec='h264', options={'crf': '10'})\n\n    \ndef tensor2videogrids(video, root, filename, fps, rescale=True, clamp=True):\n    assert(video.dim() == 5) # b,c,t,h,w\n    assert(isinstance(video, torch.Tensor))\n\n    video = video.detach().cpu()\n    if clamp:\n        video = torch.clamp(video, -1., 1.)\n    n = video.shape[0]\n    video = video.permute(2, 0, 1, 3, 4) # t,n,c,h,w\n    frame_grids = [torchvision.utils.make_grid(framesheet, nrow=int(np.sqrt(n))) for framesheet in video] # [3, grid_h, grid_w]\n    grid = torch.stack(frame_grids, dim=0) # stack in temporal dim [T, 3, grid_h, grid_w]\n    if rescale:\n        grid = (grid + 1.0) / 2.0\n    grid = (grid * 255).to(torch.uint8).permute(0, 2, 3, 1) # [T, 3, grid_h, grid_w] -> [T, grid_h, grid_w, 3]\n    path = os.path.join(root, filename)\n    torchvision.io.write_video(path, grid, fps=fps, video_codec='h264', options={'crf': '10'})\n\n\ndef log_local(batch_logs, save_dir, filename, save_fps=10, rescale=True):\n    if batch_logs is None:\n        return None\n    \"\"\" save images and videos from images dict \"\"\"\n    def save_img_grid(grid, path, rescale):\n        if rescale:\n                grid = (grid + 1.0) / 2.0  # -1,1 -> 0,1; c,h,w\n        grid = grid.transpose(0, 1).transpose(1, 2).squeeze(-1)\n        grid = grid.numpy()\n        grid = (grid * 255).astype(np.uint8)\n        os.makedirs(os.path.split(path)[0], exist_ok=True)\n        Image.fromarray(grid).save(path)\n\n    for key in batch_logs:\n        value = batch_logs[key]\n        if isinstance(value, list) and isinstance(value[0], str):\n            ## a batch of captions\n            path = os.path.join(save_dir, \"%s-%s.txt\"%(key, filename))\n            with open(path, 'w') as f:\n                for i, txt in enumerate(value):\n                    
f.write(f'idx={i}, txt={txt}\\n')\n                f.close()\n        elif isinstance(value, torch.Tensor) and value.dim() == 5:\n            ## save video grids\n            video = value # b,c,t,h,w\n            ## only save grayscale or rgb mode\n            if video.shape[1] != 1 and video.shape[1] != 3:\n                continue\n            n = video.shape[0]\n            video = video.permute(2, 0, 1, 3, 4) # t,n,c,h,w\n            frame_grids = [torchvision.utils.make_grid(framesheet, nrow=int(1), padding=0) for framesheet in video] #[3, n*h, 1*w]\n            grid = torch.stack(frame_grids, dim=0) # stack in temporal dim [t, 3, n*h, w]\n            if rescale:\n                grid = (grid + 1.0) / 2.0\n            grid = (grid * 255).to(torch.uint8).permute(0, 2, 3, 1)\n            path = os.path.join(save_dir, \"%s-%s.mp4\"%(key, filename))\n            torchvision.io.write_video(path, grid, fps=save_fps, video_codec='h264', options={'crf': '10'})\n            \n            ## save frame sheet\n            img = value\n            video_frames = rearrange(img, 'b c t h w -> (b t) c h w')\n            t = img.shape[2]\n            grid = torchvision.utils.make_grid(video_frames, nrow=t, padding=0)\n            path = os.path.join(save_dir, \"%s-%s.jpg\"%(key, filename))\n            #save_img_grid(grid, path, rescale)\n        elif isinstance(value, torch.Tensor) and value.dim() == 4:\n            ## save image grids\n            img = value\n            ## only save grayscale or rgb mode\n            if img.shape[1] != 1 and img.shape[1] != 3:\n                continue\n            n = img.shape[0]\n            grid = torchvision.utils.make_grid(img, nrow=1, padding=0)\n            path = os.path.join(save_dir, \"%s-%s.jpg\"%(key, filename))\n            save_img_grid(grid, path, rescale)\n        else:\n            pass\n\ndef prepare_to_log(batch_logs, max_images=100000, clamp=True):\n    if batch_logs is None:\n        return None\n    # process\n    for key in batch_logs:\n        N = batch_logs[key].shape[0] if hasattr(batch_logs[key], 'shape') else len(batch_logs[key])\n        N = min(N, max_images)\n        batch_logs[key] = batch_logs[key][:N]\n        ## in batch_logs: images <batched tensor> & caption <text list>\n        if isinstance(batch_logs[key], torch.Tensor):\n            batch_logs[key] = batch_logs[key].detach().cpu()\n            if clamp:\n                try:\n                    batch_logs[key] = torch.clamp(batch_logs[key].float(), -1., 1.)\n                except RuntimeError:\n                    print(\"clamp_scalar_cpu not implemented for Half\")\n    return batch_logs\n\n# ----------------------------------------------------------------------------------------------\n\ndef fill_with_black_squares(video, desired_len: int) -> Tensor:\n    if len(video) >= desired_len:\n        return video\n\n    return torch.cat([\n        video,\n        torch.zeros_like(video[0]).unsqueeze(0).repeat(desired_len - len(video), 1, 1, 1),\n    ], dim=0)\n\n# ----------------------------------------------------------------------------------------------\ndef load_num_videos(data_path, num_videos):\n    # first argument can be either data_path of np array \n    if isinstance(data_path, str):\n        videos = np.load(data_path)['arr_0'] # NTHWC\n    elif isinstance(data_path, np.ndarray):\n        videos = data_path\n    else:\n        raise Exception\n\n    if num_videos is not None:\n        videos = videos[:num_videos, :, :, :, :]\n    return videos\n\ndef 
npz_to_video_grid(data_path, out_path, num_frames, fps, num_videos=None, nrow=None, verbose=True):\n    # videos = torch.tensor(np.load(data_path)['arr_0']).permute(0,1,4,2,3).div_(255).mul_(2) - 1.0 # NTHWC->NTCHW, np int -> torch tensor 0-1\n    if isinstance(data_path, str):\n        videos = load_num_videos(data_path, num_videos)\n    elif isinstance(data_path, np.ndarray):\n        videos = data_path\n    else:\n        raise Exception\n    n,t,h,w,c = videos.shape\n    videos_th = []\n    for i in range(n):\n        video = videos[i, :,:,:,:]\n        images = [video[j, :,:,:] for j in range(t)]\n        images = [to_tensor(img) for img in images]\n        video = torch.stack(images)\n        videos_th.append(video)\n    if verbose:\n        videos = [fill_with_black_squares(v, num_frames) for v in tqdm(videos_th, desc='Adding empty frames')] # NTCHW\n    else:\n        videos = [fill_with_black_squares(v, num_frames) for v in videos_th] # NTCHW\n\n    frame_grids = torch.stack(videos).permute(1, 0, 2, 3, 4) # [T, N, C, H, W]\n    if nrow is None:\n        nrow = int(np.ceil(np.sqrt(n)))\n    if verbose:\n        frame_grids = [make_grid(fs, nrow=nrow) for fs in tqdm(frame_grids, desc='Making grids')]\n    else:\n        frame_grids = [make_grid(fs, nrow=nrow) for fs in frame_grids]\n\n    if os.path.dirname(out_path) != \"\":\n        os.makedirs(os.path.dirname(out_path), exist_ok=True)\n    frame_grids = (torch.stack(frame_grids) * 255).to(torch.uint8).permute(0, 2, 3, 1) # [T, H, W, C]\n    torchvision.io.write_video(out_path, frame_grids, fps=fps, video_codec='h264', options={'crf': '10'})\n"
  },
  {
    "path": "ToonCrafter/utils/utils.py",
    "content": "import os\nimport importlib\nimport numpy as np\nimport cv2\nimport torch\nimport torch.distributed as dist\n\n\ndef count_params(model, verbose=False):\n    total_params = sum(p.numel() for p in model.parameters())\n    if verbose:\n        print(f\"{model.__class__.__name__} has {total_params*1.e-6:.2f} M params.\")\n    return total_params\n\n\ndef check_istarget(name, para_list):\n    \"\"\"\n    name: full name of source para\n    para_list: partial name of target para\n    \"\"\"\n    istarget = False\n    for para in para_list:\n        if para in name:\n            return True\n    return istarget\n\n\ndef instantiate_from_config(config):\n    if \"target\" not in config:\n        if config == '__is_first_stage__':\n            return None\n        elif config == \"__is_unconditional__\":\n            return None\n        raise KeyError(\"Expected key `target` to instantiate.\")\n    return get_obj_from_str(config[\"target\"])(**config.get(\"params\", dict()))\n\n\ndef get_obj_from_str(string, reload=False):\n    module, cls = string.rsplit(\".\", 1)\n    if reload:\n        module_imp = importlib.import_module(module)\n        importlib.reload(module_imp)\n    return getattr(importlib.import_module(module, package=None), cls)\n\n\ndef load_npz_from_dir(data_dir):\n    data = [np.load(os.path.join(data_dir, data_name))['arr_0'] for data_name in os.listdir(data_dir)]\n    data = np.concatenate(data, axis=0)\n    return data\n\n\ndef load_npz_from_paths(data_paths):\n    data = [np.load(data_path)['arr_0'] for data_path in data_paths]\n    data = np.concatenate(data, axis=0)\n    return data\n\n\ndef resize_numpy_image(image, max_resolution=512 * 512, resize_short_edge=None):\n    h, w = image.shape[:2]\n    if resize_short_edge is not None:\n        k = resize_short_edge / min(h, w)\n    else:\n        k = max_resolution / (h * w)\n        k = k**0.5\n    h = int(np.round(h * k / 64)) * 64\n    w = int(np.round(w * k / 64)) * 64\n    image = cv2.resize(image, (w, h), interpolation=cv2.INTER_LANCZOS4)\n    return image\n\n\ndef setup_dist(args):\n    if dist.is_initialized():\n        return\n    torch.cuda.set_device(args.local_rank)\n    torch.distributed.init_process_group(\n        'nccl',\n        init_method='env://'\n    )\n"
  },
  {
    "path": "__init__.py",
    "content": "import os\nimport sys\nimport torch\nimport time\nimport logging as logger\nimport importlib\n\nfrom functools import cache\nfrom pathlib import Path\nfrom contextlib import contextmanager, ExitStack\nfrom omegaconf import OmegaConf\nfrom huggingface_hub import hf_hub_download\nfrom einops import repeat, rearrange\nfrom torchvision import transforms\nfrom pytorch_lightning import seed_everything\nfrom platform import system\nfrom comfy import model_management as mm\nfrom comfy.utils import ProgressBar\n\nif system() == \"Darwin\":\n    os.environ[\"PYTORCH_ENABLE_MPS_FALLBACK\"] = \"1\"\n    os.environ[\"HF_HOME\"] = \"~/.cache/huggingface\"\nos.environ[\"XFORMERS_FORCE_DISABLE_TRITON\"] = \"1\"\nUSER_DEF_CLIP = Path(__file__).parent.joinpath(\"models/open_clip_pytorch_model.bin\")\nif USER_DEF_CLIP.exists():\n    os.environ[\"USER_DEF_CLIP\"] = USER_DEF_CLIP.as_posix()\n# os.environ[\"no_proxy\"] = \"localhost, 127.0.0.1, ::1\"\nROOT = Path(__file__).parent.joinpath(\"ToonCrafter\")\nsys.path.append(Path(__file__).parent.as_posix())\nsys.path.append(ROOT.as_posix())\n# from ToonCrafter.utils.utils import instantiate_from_config\nfrom ToonCrafter.scripts.evaluation.funcs import load_model_checkpoint, batch_ddim_sampling\n# from ToonCrafter.cldm.model import load_state_dict\n\n\ndef instantiate_from_config(config):\n    if \"target\" not in config:\n        if config == '__is_first_stage__':\n            return None\n        elif config == \"__is_unconditional__\":\n            return None\n        raise KeyError(\"Expected key `target` to instantiate.\")\n    return get_obj_from_str(config[\"target\"])(**config.get(\"params\", dict()))\n\n\ndef get_obj_from_str(string, reload=False):\n    module, cls = string.rsplit(\".\", 1)\n    if reload:\n        module_imp = importlib.import_module(module)\n        importlib.reload(module_imp)\n    return getattr(importlib.import_module(module, package=None), cls)\n\n\ndef get_state_dict(d):\n    return d.get('state_dict', d)\n\n\ndef load_state_dict(ckpt_path, location='cpu'):\n    _, extension = os.path.splitext(ckpt_path)\n    if extension.lower() == \".safetensors\":\n        import safetensors.torch\n        state_dict = safetensors.torch.load_file(ckpt_path, device=location)\n    else:\n        state_dict = get_state_dict(torch.load(ckpt_path, map_location=torch.device(location)))\n    state_dict = get_state_dict(state_dict)\n    print(f'Loaded state_dict from [{ckpt_path}]')\n    return state_dict\n\n\n@cache\ndef get_models(root: Path = ROOT.joinpath(\"checkpoints\"), ignoreed: tuple = (\"sketch_encoder.ckpt\", )):\n    ckpts = []\n    files = []\n    for ext in ['ckpt', 'pt', 'bin', 'pth', 'safetensors', 'pkl']:\n        files.extend(root.rglob(f\"*.{ext}\"))\n    for file in files:\n        if file.name in ignoreed:\n            continue\n        ckpts.append(file.relative_to(root).as_posix())\n    return sorted(ckpts)\n\n\nclass ToonCrafterNode:\n\n    @classmethod\n    def INPUT_TYPES(s):\n        return {\n            \"required\": {\n                \"image\": (\"IMAGE\", ),\n                \"image2\": (\"IMAGE\", ),\n                \"ckpt_name\": (get_models(), ),\n                \"vram_opt_strategy\": ([\"none\", \"low\"], ),\n                \"prompt\": (\"STRING\", {\"multiline\": True, \"dynamicPrompts\": True}),\n                # \"clip\": (\"CLIP\", ),\n                \"seed\": (\"INT\", {\"default\": 123, \"min\": 0, \"max\": 0xffffffffffffffff}),\n                \"eta\": (\"FLOAT\", {\"default\": 1.0, 
\"min\": 0.0, \"max\": 15.0, \"step\": 0.1}),\n                \"cfg_scale\": (\"FLOAT\", {\"default\": 7.5, \"min\": 1.0, \"max\": 15.0, \"step\": 0.5}),\n                \"steps\": (\"INT\", {\"default\": 50, \"min\": 1, \"max\": 60, \"step\": 1}),\n                \"frame_count\": (\"INT\", {\"default\": 10, \"min\": 5, \"max\": 30, \"step\": 1}),\n                \"fps\": (\"INT\", {\"default\": 8, \"min\": 1, \"max\": 60, \"step\": 1}),\n            }\n        }\n\n    RETURN_TYPES = (\"IMAGE\", )\n    FUNCTION = \"get_image\"\n\n    OUTPUT_NODE = True\n\n    CATEGORY = \"ToonCrafter\"\n\n    def init(self, ckpt_name=\"\", result_dir=ROOT.joinpath(\"tmp/\"), gpu_num=1, resolution='320_512') -> None:\n        h, w = resolution.split('_')\n        self.resolution = int(h), int(w)\n        # self.download_model()\n\n        self.result_dir = result_dir\n        Path(self.result_dir).mkdir(parents=True, exist_ok=True)\n        ckpt_path = ROOT.joinpath(\"checkpoints\", ckpt_name)\n        if not ckpt_path.exists():\n            ckpt_path = ROOT.joinpath(f'checkpoints/tooncrafter_{w}_interp_v1', 'model.ckpt')\n        if not ckpt_path.exists():\n            raise Exception(f\"ToonCrafterNode Error: {ckpt_path} Not Found!\")\n        config_file = ROOT.joinpath(f'configs/inference_{w}_v1.0.yaml')\n        config = OmegaConf.load(config_file.as_posix())\n        model_config = config.pop(\"model\", OmegaConf.create())\n        model_config['params']['unet_config']['params']['use_checkpoint'] = False\n        model_list = []\n        # mm.unload_all_models()\n        for gpu_id in range(gpu_num):\n            model = instantiate_from_config(model_config)\n            # model = model.cuda(gpu_id)\n            logger.info(ckpt_path)\n            assert ckpt_path.exists(), \"Error: checkpoint Not Found!\"\n            model = load_model_checkpoint(model, ckpt_path.as_posix())\n            model.eval()\n            model_list.append(model)\n        self.model_list = model_list\n        self.save_fps = 8\n        self.is_cuda = torch.cuda.is_available()\n        self.is_mps = torch.backends.mps.is_available()\n        self.is_cpu = torch.cpu.is_available()\n\n    @contextmanager\n    def optional_autocast(device):\n        try:\n            with torch.autocast(device.type):\n                yield\n        except Exception as e:\n            print(f\"Autocast is not supported: {e}\")\n            yield\n\n    def get_image(self, image: torch.Tensor, ckpt_name, vram_opt_strategy, prompt, steps=50, cfg_scale=7.5, eta=1.0, frame_count=3, fps=8, seed=123, image2: torch.Tensor = None):\n        os.environ[\"TOON_MEM_STRATEGY\"] = vram_opt_strategy\n        self.init(ckpt_name=ckpt_name)\n        self.save_fps = fps\n        seed = seed % 4294967295\n        seed_everything(seed)\n        transform = transforms.Compose([\n            transforms.Resize(min(self.resolution)),\n            transforms.CenterCrop(self.resolution),\n        ])\n        mm.soft_empty_cache()\n        print('start:', prompt, time.strftime('%Y-%m-%d %H:%M:%S', time.localtime(time.time())))\n        start = time.time()\n        gpu_id = 0\n        if steps > 60:\n            steps = 60\n        model: torch.nn.Module = self.model_list[gpu_id]\n        half = mm.should_use_bf16() or mm.should_use_fp16() or vram_opt_strategy == \"low\"\n        if half:\n            model = model.half()\n            image = image.half()\n            image2 = image2.half()\n        if self.is_cuda:\n            model = model.to('cuda')\n        elif 
self.is_mps:\n            model = model.to('mps')\n        elif self.is_cpu:\n            model = model.to('cpu')\n        batch_size = 1\n        channels = model.model.diffusion_model.out_channels\n        frames = model.temporal_length\n        h, w = self.resolution[0] // 8, self.resolution[1] // 8\n        noise_shape = [batch_size, channels, frames, h, w]\n        pbar = ProgressBar(steps)\n        # text cond\n        with ExitStack() as stack:\n            stack.enter_context(torch.no_grad())\n            if self.is_cuda:\n                stack.enter_context(torch.cuda.amp.autocast())\n            # stack.enter_context(self.optional_autocast(device=model.device))\n            text_emb = model.get_learned_conditioning([prompt])\n            model.cond_stage_model.to(\"cpu\")\n            # img cond\n            img_tensor = image[0].permute(2, 0, 1).to(model.device)\n            img_tensor = (img_tensor - 0.5) * 2\n\n            image_tensor_resized = transform(img_tensor)  # 3,h,w\n            videos = image_tensor_resized.unsqueeze(0).unsqueeze(2)  # bc1hw\n\n            # z = get_latent_z(model, videos) #bc,1,hw\n            videos = repeat(videos, 'b c t h w -> b c (repeat t) h w', repeat=frames // 2)\n            img_tensor2 = image2[0].permute(2, 0, 1).to(model.device)\n            img_tensor2 = (img_tensor2 - 0.5) * 2\n            image_tensor_resized2 = transform(img_tensor2)  # 3,h,w\n            videos2 = image_tensor_resized2.unsqueeze(0).unsqueeze(2)  # bchw\n            videos2 = repeat(videos2, 'b c t h w -> b c (repeat t) h w', repeat=frames // 2)\n\n            videos = torch.cat([videos, videos2], dim=2)\n            # v10 = torch.mps.driver_allocated_memory() / 1024**3\n            mm.soft_empty_cache()\n            # v11 = torch.mps.driver_allocated_memory() / 1024**3\n            z, hs = self.get_latent_z_with_hidden_states(model, videos)\n            model.cond_stage_model.to(model.device)\n            # v20 = torch.mps.driver_allocated_memory() / 1024**3\n            mm.soft_empty_cache()\n            # v21 = torch.mps.driver_allocated_memory() / 1024**3\n\n            img_tensor_repeat = torch.zeros_like(z).to(dtype=model.dtype)\n\n            img_tensor_repeat[:, :, :1, :, :] = z[:, :, :1, :, :]\n            img_tensor_repeat[:, :, -1:, :, :] = z[:, :, -1:, :, :]\n\n            cond_images = model.embedder(img_tensor.unsqueeze(0))  # blc\n            img_emb = model.image_proj_model(cond_images)\n\n            imtext_cond = torch.cat([text_emb, img_emb], dim=1)\n\n            del cond_images, text_emb, img_emb, videos, videos2, image_tensor_resized2, img_tensor2, image_tensor_resized, image\n            fs = torch.tensor([frame_count], dtype=torch.long, device=model.device)\n            cond = {\"c_crossattn\": [imtext_cond], \"fs\": fs, \"c_concat\": [img_tensor_repeat]}\n\n            def cb(step):\n                print(f\"step: {step}\", end='\\r')\n                pbar.update_absolute(step + 1)\n\n            mm.soft_empty_cache()\n            # inference\n            batch_samples = batch_ddim_sampling(model, cond, noise_shape, n_samples=1, ddim_steps=steps, ddim_eta=eta, cfg_scale=cfg_scale, hs=hs, callback=cb)\n\n            # remove the last frame\n            if image2 is None:\n                batch_samples = batch_samples[:, :, :, :-1, ...]\n            # b,samples,c,t,h,w\n            prompt_str = prompt.replace(\"/\", \"_slash_\") if \"/\" in prompt else prompt\n            prompt_str = prompt_str.replace(\" \", \"_\") if \" \" in prompt else 
prompt_str\n            prompt_str = prompt_str[:40]\n            if len(prompt_str) == 0:\n                prompt_str = 'empty_prompt'\n\n        # self.save_videos(batch_samples, self.result_dir, filenames=[prompt_str], fps=self.save_fps)\n        print(f\"Saved in {prompt_str}. Time used: {(time.time() - start):.2f} seconds\")\n        try:\n            # frame_count, width, height, channel\n            batch_samples = batch_samples[0][0].permute(1, 2, 3, 0)\n            if half:\n                batch_samples = batch_samples.to(dtype=torch.float32)\n        except Exception as e:\n            sys.stderr.write(f\"{e}\\n\")\n            return (None, )\n        batch_samples = (batch_samples + 1.0) * 0.5\n        mm.soft_empty_cache()\n        model = model.cpu()\n        return (batch_samples, )\n\n    def save_videos(self, batch_tensors, savedir, filenames, fps=10):\n        import torchvision\n        # b,samples,c,t,h,w\n        n_samples = batch_tensors.shape[1]\n        for idx, vid_tensor in enumerate(batch_tensors):\n            video = vid_tensor.detach().cpu()\n            video = torch.clamp(video.float(), -1., 1.)\n            video = video.permute(2, 0, 1, 3, 4)  # t,n,c,h,w\n            frame_grids = [torchvision.utils.make_grid(framesheet, nrow=int(n_samples)) for framesheet in video]  # [3, 1*h, n*w]\n            grid = torch.stack(frame_grids, dim=0)  # stack in temporal dim [t, 3, n*h, w]\n            grid = (grid + 1.0) / 2.0\n            grid = (grid * 255).to(torch.uint8).permute(0, 2, 3, 1)\n            savepath = os.path.join(savedir, f\"{filenames[idx]}.mp4\")\n            torchvision.io.write_video(savepath, grid, fps=fps, video_codec='h264', options={'crf': '10'})\n\n    def download_model(self):\n        REPO_ID = 'Doubiiu/ToonCrafter'\n        filename_list = ['model.ckpt']\n        model_dir = ROOT.joinpath('checkpoints/tooncrafter_' + str(self.resolution[1]) + '_interp_v1/')\n        model_dir.mkdir(parents=True, exist_ok=True)\n        for filename in filename_list:\n            local_file = model_dir.joinpath(filename)\n            if not local_file.exists():\n                hf_hub_download(repo_id=REPO_ID, filename=filename, local_dir=model_dir.as_posix(), local_dir_use_symlinks=False)\n\n    def get_latent_z_with_hidden_states(self, model, videos):\n        b, c, t, h, w = videos.shape\n        x = rearrange(videos, 'b c t h w -> (b t) c h w')\n        encoder_posterior, hidden_states = model.first_stage_model.encode(x, return_hidden_states=True)\n\n        hidden_states_first_last = []\n        # use only the first and last hidden states\n        for hid in hidden_states:\n            hid = rearrange(hid, '(b t) c h w -> b c t h w', t=t)\n            hid_new = torch.cat([hid[:, :, 0:1], hid[:, :, -1:]], dim=2)\n            hidden_states_first_last.append(hid_new)\n\n        z = model.get_first_stage_encoding(encoder_posterior).detach()\n        z = rearrange(z, '(b t) c h w -> b c t h w', b=b, t=t)\n        return z, hidden_states_first_last\n\n\nclass ToonCrafterWithSketch(ToonCrafterNode):\n\n    @classmethod\n    def INPUT_TYPES(s):\n        return {\n            \"required\": {\n                \"image\": (\"IMAGE\", ),\n                \"image2\": (\"IMAGE\", ),\n                \"frame_guides\": (\"IMAGE\", ),\n                \"ckpt_name\": (get_models(), ),\n                \"vram_opt_strategy\": ([\"none\", \"low\"], ),\n                \"prompt\": (\"STRING\", {\"multiline\": True, \"dynamicPrompts\": True}),\n                \"seed\": 
(\"INT\", {\"default\": 123, \"min\": 0, \"max\": 0xffffffffffffffff}),\n                \"eta\": (\"FLOAT\", {\"default\": 1.0, \"min\": 0.0, \"max\": 15.0, \"step\": 0.1}),\n                \"cfg_scale\": (\"FLOAT\", {\"default\": 7.5, \"min\": 1.0, \"max\": 15.0, \"step\": 0.5}),\n                \"steps\": (\"INT\", {\"default\": 50, \"min\": 1, \"max\": 60, \"step\": 1}),\n                \"frame_count\": (\"INT\", {\"default\": 10, \"min\": 5, \"max\": 30, \"step\": 1}),\n                \"fps\": (\"INT\", {\"default\": 8, \"min\": 1, \"max\": 60, \"step\": 1}),\n                \"control_scale\": (\"FLOAT\", {\"default\": 0.6, \"min\": 0, \"max\": 1.0, \"step\": 0.1}),\n            }\n        }\n\n    RETURN_TYPES = (\"IMAGE\", )\n    FUNCTION = \"get_image\"\n\n    OUTPUT_NODE = True\n\n    CATEGORY = \"ToonCrafter\"\n\n    def init(self, ckpt_name=\"\", result_dir=ROOT.joinpath(\"tmp/\"), gpu_num=1, resolution='320_512') -> None:\n        h, w = resolution.split('_')\n        self.resolution = int(h), int(w)\n\n        self.result_dir = result_dir\n        Path(self.result_dir).mkdir(parents=True, exist_ok=True)\n        ckpt_path = ROOT.joinpath(\"checkpoints\", ckpt_name)\n        if not ckpt_path.exists():\n            ckpt_path = ROOT.joinpath(f'checkpoints/tooncrafter_{w}_interp_v1', 'model.ckpt')\n        if not ckpt_path.exists():\n            raise Exception(f\"ToonCrafterWithSketch Error: {ckpt_path} Not Found!\")\n        config_file = ROOT.joinpath(f'configs/inference_{w}_v1.0.yaml')\n        config = OmegaConf.load(config_file.as_posix())\n        model_config = config.pop(\"model\", OmegaConf.create())\n        model_config['params']['unet_config']['params']['use_checkpoint'] = False\n        model_list = []\n\n        # ControlModel\n        cn_ckpt_path = ROOT.joinpath(\"checkpoints\", \"sketch_encoder.ckpt\")\n        cn_config_file = ROOT.joinpath(\"configs/cldm_v21.yaml\")\n        cn_config = OmegaConf.load(cn_config_file.as_posix())\n        cn_model_config = cn_config.pop(\"control_stage_config\", OmegaConf.create())\n        self.is_cuda = torch.cuda.is_available()\n        self.is_mps = torch.backends.mps.is_available()\n        self.is_cpu = torch.cpu.is_available()\n        self.device = \"cuda\" if self.is_cuda else \"mps\" if self.is_mps else \"cpu\"\n        model_list = []\n        for gpu_id in range(gpu_num):\n            model = instantiate_from_config(model_config)\n            cn_model = instantiate_from_config(cn_model_config)\n\n            # model = model.cuda(gpu_id)\n            assert ckpt_path.exists(), \"Error: checkpoint Not Found!\"\n            model = load_model_checkpoint(model, ckpt_path)\n            model.eval()\n\n            cn_model.load_state_dict(load_state_dict(cn_ckpt_path, location=self.device))\n            cn_model.eval()\n            model.control_model = cn_model\n            model_list.append(model)\n        self.model_list = model_list\n        self.save_fps = 8\n\n    def get_image(self, image: torch.Tensor, ckpt_name, vram_opt_strategy, prompt, steps=50, cfg_scale=7.5, eta=1.0, frame_count=3, fps=8, seed=123, image2: torch.Tensor = None, frame_guides=None, control_scale=0.6):\n        os.environ[\"TOON_MEM_STRATEGY\"] = vram_opt_strategy\n        self.init(ckpt_name=ckpt_name)\n        control_frames = frame_guides\n        self.save_fps = fps\n        seed = seed % 4294967295\n        seed_everything(seed)\n        transform = transforms.Compose([\n            transforms.Resize(min(self.resolution)),\n            
transforms.CenterCrop(self.resolution),\n        ])\n        mm.soft_empty_cache()\n        print('start:', prompt, time.strftime('%Y-%m-%d %H:%M:%S', time.localtime(time.time())))\n        start = time.time()\n        gpu_id = 0\n        if steps > 60:\n            steps = 60\n        model: torch.nn.Module = self.model_list[gpu_id]\n        if self.is_cuda:\n            model = model.to('cuda')\n        elif self.is_mps:\n            model = model.to('mps')\n        elif self.is_cpu:\n            model = model.to('cpu')\n        half = mm.should_use_bf16() or mm.should_use_fp16() or vram_opt_strategy == \"low\"\n        if half:\n            model = model.half()\n            model.control_model.dtype = model.dtype\n            image = image.half()\n            image2 = image2.half()\n            control_frames = control_frames.half()\n        batch_size = 1\n        channels = model.model.diffusion_model.out_channels\n        frames = model.temporal_length\n        h, w = self.resolution[0] // 8, self.resolution[1] // 8\n        noise_shape = [batch_size, channels, frames, h, w]\n        pbar = ProgressBar(steps)\n        # text cond\n        with ExitStack() as stack:\n            stack.enter_context(torch.no_grad())\n            if self.is_cuda:\n                stack.enter_context(torch.cuda.amp.autocast())\n            text_emb = model.get_learned_conditioning([prompt])\n\n            # control cond\n            if frame_guides is not None:\n                cn_videos = []\n                for frame in control_frames:\n                    # frame = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)\n                    # frame = cv2.bitwise_not(frame)\n                    cn_tensor = frame.permute(2, 0, 1).to(model.device)\n\n                    # cn_tensor = (cn_tensor / 255. - 0.5) * 2\n                    # cn_tensor = (cn_tensor / 255.0)\n                    cn_tensor_resized = transform(cn_tensor)  # 3,h,w\n\n                    cn_video = cn_tensor_resized.unsqueeze(0).unsqueeze(2)  # bc1hw\n                    cn_videos.append(cn_video)\n\n                cn_videos = torch.cat(cn_videos, dim=2)\n                del control_frames\n                model_list = []\n                for model in self.model_list:\n                    model.control_scale = control_scale\n                    model_list.append(model)\n                self.model_list = model_list\n\n            else:\n                cn_videos = None\n\n            # img cond\n            img_tensor = image[0].permute(2, 0, 1).to(model.device)\n            img_tensor = (img_tensor - 0.5) * 2\n            # img_tensor = torch.from_numpy(image).permute(2, 0, 1).float().to(model.device)\n            # img_tensor = (img_tensor / 255. - 0.5) * 2\n\n            image_tensor_resized = transform(img_tensor)  # 3,h,w\n            videos = image_tensor_resized.unsqueeze(0).unsqueeze(2)  # bc1hw\n\n            # z = get_latent_z(model, videos) #bc,1,hw\n            videos = repeat(videos, 'b c t h w -> b c (repeat t) h w', repeat=frames // 2)\n            img_tensor2 = image2[0].permute(2, 0, 1).to(model.device)\n            img_tensor2 = (img_tensor2 - 0.5) * 2\n            # img_tensor2 = torch.from_numpy(image2).permute(2, 0, 1).float().to(model.device)\n            # img_tensor2 = (img_tensor2 / 255. 
- 0.5) * 2\n\n            image_tensor_resized2 = transform(img_tensor2)  # 3,h,w\n            videos2 = image_tensor_resized2.unsqueeze(0).unsqueeze(2)  # bchw\n            videos2 = repeat(videos2, 'b c t h w -> b c (repeat t) h w', repeat=frames // 2)\n\n            videos = torch.cat([videos, videos2], dim=2)\n            mm.soft_empty_cache()\n            z, hs = self.get_latent_z_with_hidden_states(model, videos)\n            model.cond_stage_model.to(model.device)\n            mm.soft_empty_cache()\n            # img_tensor_repeat = torch.zeros_like(z)\n            img_tensor_repeat = torch.zeros_like(z).to(dtype=model.dtype)\n\n            img_tensor_repeat[:, :, :1, :, :] = z[:, :, :1, :, :]\n            img_tensor_repeat[:, :, -1:, :, :] = z[:, :, -1:, :, :]\n\n            cond_images = model.embedder(img_tensor.unsqueeze(0))  # blc\n            img_emb = model.image_proj_model(cond_images)\n\n            imtext_cond = torch.cat([text_emb, img_emb], dim=1)\n\n            del cond_images, text_emb, img_emb, videos, videos2, image_tensor_resized2, img_tensor2, image_tensor_resized, image\n            fs = torch.tensor([frame_count], dtype=torch.long, device=model.device)\n            cond = {\"c_crossattn\": [imtext_cond], \"fs\": fs, \"c_concat\": [img_tensor_repeat], \"control_cond\": cn_videos}\n\n            def cb(step):\n                print(f\"step: {step}\", end='\\r')\n                pbar.update_absolute(step + 1)\n\n            mm.soft_empty_cache()\n            # inference\n            batch_samples = batch_ddim_sampling(model, cond, noise_shape, n_samples=1, ddim_steps=steps, ddim_eta=eta, cfg_scale=cfg_scale, hs=hs, callback=cb)\n\n            # remove the last frame\n            if image2 is None:\n                batch_samples = batch_samples[:, :, :, :-1, ...]\n            # b,samples,c,t,h,w\n            prompt_str = prompt.replace(\"/\", \"_slash_\") if \"/\" in prompt else prompt\n            prompt_str = prompt_str.replace(\" \", \"_\") if \" \" in prompt else prompt_str\n            prompt_str = prompt_str[:40]\n            if len(prompt_str) == 0:\n                prompt_str = 'empty_prompt'\n\n        # self.save_videos(batch_samples, self.result_dir, filenames=[prompt_str], fps=self.save_fps)\n        print(f\"Saved in {prompt_str}. Time used: {(time.time() - start):.2f} seconds\")\n        try:\n            # frame_count, width, height, channel\n            batch_samples = batch_samples[0][0].permute(1, 2, 3, 0)\n            if half:\n                batch_samples = batch_samples.to(dtype=torch.float32)\n        except Exception as e:\n            sys.stderr.write(f\"{e}\\n\")\n            return (None, )\n        batch_samples = (batch_samples + 1.0) * 0.5\n        mm.soft_empty_cache()\n        model = model.cpu()\n        return (batch_samples, )\n\n\nNODE_CLASS_MAPPINGS = {\n    \"ToonCrafterNode\": ToonCrafterNode,\n    \"ToonCrafterWithSketch\": ToonCrafterWithSketch,\n}\n\n\nNODE_DISPLAY_NAME_MAPPINGS = {\n    \"ToonCrafterNode\": \"ToonCrafter\",\n    \"ToonCrafterWithSketch\": \"ToonCrafterWithSketch\",\n}\n\nWEB_DIRECTORY = \"./\"\n"
  },
  {
    "path": "pre_run.py",
    "content": "import sys\nimport argparse\nimport os\nfrom pathlib import Path\nsys.path.append(Path(__file__).parent.as_posix())\nfrom scripts.evaluation.inference import seed_everything, run_inference, get_parser\n\n\ndef run():\n    old_dir = os.getcwd()\n    os.chdir(Path(__file__).parent.as_posix())\n    parser = get_parser()\n    ckpt = \"checkpoints/tooncrafter_512_interp_v1/model.ckpt\"\n    config = \"configs/inference_512_v1.0.yaml\"\n\n    prompt_dir = \"prompts/512_interp/\"\n    res_dir = \"results\"\n\n    FS = 10  # This model adopts FPS=5, range recommended: 5-30 (smaller value -> larger motion)\n\n    seed = 123\n    name = f\"tooncrafter_512_interp_seed{seed}\"\n    os.environ[\"CUDA_VISIBLE_DEVICES\"] = \"0\"\n    namespace = argparse.Namespace(\n        seed=123,\n        ckpt_path=ckpt,\n        config=config,\n        savedir=f\"{res_dir}/{name}\",\n        n_samples=1,\n        bs=1,\n        height=320,\n        width=512,\n        unconditional_guidance_scale=7.5,\n        ddim_steps=50,\n        ddim_eta=1.0,\n        prompt_dir=prompt_dir,\n        text_input=True,\n        video_length=16,\n        frame_stride=FS,\n        timestep_spacing='uniform_trailing',\n        guidance_rescale=0.7,\n        perframe_ae=True,\n        interp=True\n    )\n    args = parser.parse_args(args=[], namespace=namespace)\n    seed_everything(args.seed)\n    rank, gpu_num = 0, 1\n    run_inference(args, gpu_num, rank)\n    os.chdir(old_dir)\n\n\nif __name__ == \"__main__\":\n    run()\n"
  },
  {
    "path": "pyproject.toml",
    "content": "[project]\nname = \"comfyui-tooncrafter\"\ndescription = \"This project is used to enable [a/ToonCrafter](https://github.com/ToonCrafter/ToonCrafter) to be used in ComfyUI.\\nYou can use it to achieve generative keyframe animation\\nAnd use it in Blender for animation rendering and prediction\"\nversion = \"1.0.0\"\nlicense = \"LICENSE\"\ndependencies = [\"imageio==2.9.0\", \"numpy\", \"omegaconf==2.1.1\", \"opencv_python\", \"pandas\", \"pytorch_lightning==1.9.3\", \"pyyaml\", \"setuptools\", \"moviepy\", \"av\", \"xformers\", \"timm\", \"scikit-learn\", \"open_clip_torch==2.22.0\", \"decord==0.6.0\"]\n\n[project.urls]\nRepository = \"https://github.com/AIGODLIKE/ComfyUI-ToonCrafter\"\n#  Used by Comfy Registry https://comfyregistry.org\n\n[tool.comfy]\nPublisherId = \"\"\nDisplayName = \"ComfyUI-ToonCrafter\"\nIcon = \"\"\n"
  },
  {
    "path": "readme.md",
    "content": "# Introduction\nThis project is used to enable [ToonCrafter](https://github.com/ToonCrafter/ToonCrafter) to be used in ComfyUI.\n\nYou can use it to achieve generative keyframe animation(RTX 4090,26s)\n\nhttps://github.com/AIGODLIKE/ComfyUI-ToonCrafter/assets/116185401/68edb789-5a8e-418f-ae35-e3cfe6ab1300\n\nhttps://github.com/AIGODLIKE/ComfyUI-ToonCrafter/assets/116185401/86553c22-9395-4b0a-9d8d-0c29c7467bd3\n\nAnd use it in Blender for animation rendering and prediction\n\n**Additionally, it can be used completely without a network**\n\n## Installation\n1. ComfyUI Custom Node\n   ```bash\n   cd ComfyUI/custom_nodes\n   git clone https://github.com/AIGODLIKE/ComfyUI-ToonCrafter\n   cd ComfyUI-ToonCrafter\n   # install dependencies\n   ..\\..\\..\\python_embeded\\python.exe -m pip install -r requirements.txt\n   ```\n2. Model Prepare\n   - Download the weights:\n\n     - [512 full weights](https://github.com/ToonCrafter/ToonCrafter?tab=readme-ov-file#-models) *High VRAM usage, fp16 reccomended*\n     \n     - [512 fp16 weights](https://huggingface.co/Kijai/DynamiCrafter_pruned/resolve/main/tooncrafter_512_interp-fp16.safetensors)\n\n\n   - Put it in into `ComfyUI-ToonCrafter\\ToonCrafter\\checkpoints\\tooncrafter_512_interp_v1` for example 512x512.\n3. Enjoy it!\n\n## Showcases\n\n### Blender\n\nYou can even use it directly in Blender!([ComfyUI-BlenderAI-node](https://github.com/AIGODLIKE/ComfyUI-BlenderAI-node))\n\nhttps://github.com/AIGODLIKE/ComfyUI-ToonCrafter/assets/116185401/ca8ec681-b5bc-40a1-b12a-ad185acff477\n\n<table class=\"center\">\n    <tr style=\"font-weight: bolder;text-align:center;\">\n        <td>Input starting frame</td>\n        <td>Input ending frame</td>\n        <td>Generated video</td>\n    </tr>\n  <tr>\n  <td>\n    <img src=https://github.com/AIGODLIKE/ComfyUI-ToonCrafter/assets/116185401/1f4a4fe6-52ff-45f8-9a88-277a4eee9c8c width=\"250\">\n  </td>\n  <td>\n    <img src=https://github.com/AIGODLIKE/ComfyUI-ToonCrafter/assets/116185401/cf7c1d18-33a4-45e6-bc9a-9f7dc53b0547 width=\"250\">\n  </td>\n  <td>\n    <img src=https://github.com/AIGODLIKE/ComfyUI-ToonCrafter/assets/116185401/9a10f89b-e515-44db-869d-1769ae7d9677 width=\"250\">\n  </td>\n  </tr>\n</table>\n"
  },
  {
    "path": "requirements.txt",
    "content": "imageio==2.9.0\nnumpy\nomegaconf==2.1.1\nopencv_python\npandas\npytorch_lightning==1.9.3\npyyaml\nsetuptools\nmoviepy\nav\nxformers\ntimm\nscikit-learn\nopen_clip_torch==2.22.0\ndecord==0.6.0\n"
  }
]