Repository: nosiu/comfyui-instantId-faceswap
Branch: main
Commit: d125a2a5ef50
Files: 31
Total size: 908.4 KB

Directory structure:
comfyui-instantId-faceswap/

├── .github/
│   └── workflows/
│       └── publish_action.yml
├── .gitignore
├── LICENSE
├── README.md
├── __init__.py
├── ip_adapter/
│   ├── instantId.py
│   └── resampler.py
├── nodes.py
├── pyproject.toml
├── requirements.txt
├── ui/
│   ├── dialogs.js
│   ├── extension.js
│   ├── helpers.js
│   ├── shaders.js
│   └── uiHelpers.js
├── utils.py
└── workflows/
    ├── auto_rotate.json
    ├── draw_kps.json
    ├── draw_kps_rotate.json
    ├── inpaint.json
    ├── promp2image.json
    ├── promp2image_detail_pass.json
    ├── prompts2img_2faces_enhancement.json
    ├── prop2image_latent_upscale.json
    ├── prop2image_latent_upscale_with_2d_randomizer.json
    ├── prop2image_latent_upscale_with_3d_and_2d_randomizer.json
    ├── prop2image_latent_upscale_with_3d_and_2d_randomizer_with_rotation.json
    ├── simple.json
    ├── simple_two_embeds.json
    ├── simple_with_adapter.json
    └── very_simple.json

================================================
FILE CONTENTS
================================================

================================================
FILE: .github/workflows/publish_action.yml
================================================
name: Publish to Comfy registry
on:
  workflow_dispatch:
  push:
    branches:
      - main
    paths:
      - "pyproject.toml"

jobs:
  publish-node:
    name: Publish Custom Node to registry
    runs-on: ubuntu-latest
    steps:
      - name: Check out code
        uses: actions/checkout@v4
      - name: Publish Custom Node
        uses: Comfy-Org/publish-node-action@main
        with:
          personal_access_token: ${{ secrets.REGISTRY_ACCESS_TOKEN }}

================================================
FILE: .gitignore
================================================
# Byte-compiled / optimized / DLL files
__pycache__/
*.py[cod]
*$py.class

# C extensions
*.so

# Distribution / packaging
.Python
build/
develop-eggs/
dist/
downloads/
eggs/
.eggs/
lib/
lib64/
parts/
sdist/
var/
wheels/
share/python-wheels/
*.egg-info/
.installed.cfg
*.egg
MANIFEST

# PyInstaller
#  Usually these files are written by a python script from a template
#  before PyInstaller builds the exe, so as to inject date/other infos into it.
*.manifest
*.spec

# Installer logs
pip-log.txt
pip-delete-this-directory.txt

# Unit test / coverage reports
htmlcov/
.tox/
.nox/
.coverage
.coverage.*
.cache
nosetests.xml
coverage.xml
*.cover
*.py,cover
.hypothesis/
.pytest_cache/
cover/

# Translations
*.mo
*.pot

# Django stuff:
*.log
local_settings.py
db.sqlite3
db.sqlite3-journal

# Flask stuff:
instance/
.webassets-cache

# Scrapy stuff:
.scrapy

# Sphinx documentation
docs/_build/

# PyBuilder
.pybuilder/
target/

# Jupyter Notebook
.ipynb_checkpoints

# IPython
profile_default/
ipython_config.py

# pyenv
#   For a library or package, you might want to ignore these files since the code is
#   intended to run in multiple environments; otherwise, check them in:
# .python-version

# pipenv
#   According to pypa/pipenv#598, it is recommended to include Pipfile.lock in version control.
#   However, in case of collaboration, if having platform-specific dependencies or dependencies
#   having no cross-platform support, pipenv may install dependencies that don't work, or not
#   install all needed dependencies.
#Pipfile.lock

# poetry
#   Similar to Pipfile.lock, it is generally recommended to include poetry.lock in version control.
#   This is especially recommended for binary packages to ensure reproducibility, and is more
#   commonly ignored for libraries.
#   https://python-poetry.org/docs/basic-usage/#commit-your-poetrylock-file-to-version-control
#poetry.lock

# pdm
#   Similar to Pipfile.lock, it is generally recommended to include pdm.lock in version control.
#pdm.lock
#   pdm stores project-wide configurations in .pdm.toml, but it is recommended to not include it
#   in version control.
#   https://pdm.fming.dev/#use-with-ide
.pdm.toml

# PEP 582; used by e.g. github.com/David-OConnor/pyflow and github.com/pdm-project/pdm
__pypackages__/

# Celery stuff
celerybeat-schedule
celerybeat.pid

# SageMath parsed files
*.sage.py

# Environments
.env
.venv
env/
venv/
ENV/
env.bak/
venv.bak/

# Spyder project settings
.spyderproject
.spyproject

# Rope project settings
.ropeproject

# mkdocs documentation
/site

# mypy
.mypy_cache/
.dmypy.json
dmypy.json

# Pyre type checker
.pyre/

# pytype static type analyzer
.pytype/

# Cython debug symbols
cython_debug/

# PyCharm
#  JetBrains specific template is maintained in a separate JetBrains.gitignore that can
#  be found at https://github.com/github/gitignore/blob/main/Global/JetBrains.gitignore
#  and can be added to the global gitignore or merged into this file.  For a more nuclear
#  option (not recommended) you can uncomment the following to ignore the entire idea folder.
#.idea/


================================================
FILE: LICENSE
================================================
                                 Apache License
                           Version 2.0, January 2004
                        http://www.apache.org/licenses/

   TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION

   1. Definitions.

      "License" shall mean the terms and conditions for use, reproduction,
      and distribution as defined by Sections 1 through 9 of this document.

      "Licensor" shall mean the copyright owner or entity authorized by
      the copyright owner that is granting the License.

      "Legal Entity" shall mean the union of the acting entity and all
      other entities that control, are controlled by, or are under common
      control with that entity. For the purposes of this definition,
      "control" means (i) the power, direct or indirect, to cause the
      direction or management of such entity, whether by contract or
      otherwise, or (ii) ownership of fifty percent (50%) or more of the
      outstanding shares, or (iii) beneficial ownership of such entity.

      "You" (or "Your") shall mean an individual or Legal Entity
      exercising permissions granted by this License.

      "Source" form shall mean the preferred form for making modifications,
      including but not limited to software source code, documentation
      source, and configuration files.

      "Object" form shall mean any form resulting from mechanical
      transformation or translation of a Source form, including but
      not limited to compiled object code, generated documentation,
      and conversions to other media types.

      "Work" shall mean the work of authorship, whether in Source or
      Object form, made available under the License, as indicated by a
      copyright notice that is included in or attached to the work
      (an example is provided in the Appendix below).

      "Derivative Works" shall mean any work, whether in Source or Object
      form, that is based on (or derived from) the Work and for which the
      editorial revisions, annotations, elaborations, or other modifications
      represent, as a whole, an original work of authorship. For the purposes
      of this License, Derivative Works shall not include works that remain
      separable from, or merely link (or bind by name) to the interfaces of,
      the Work and Derivative Works thereof.

      "Contribution" shall mean any work of authorship, including
      the original version of the Work and any modifications or additions
      to that Work or Derivative Works thereof, that is intentionally
      submitted to Licensor for inclusion in the Work by the copyright owner
      or by an individual or Legal Entity authorized to submit on behalf of
      the copyright owner. For the purposes of this definition, "submitted"
      means any form of electronic, verbal, or written communication sent
      to the Licensor or its representatives, including but not limited to
      communication on electronic mailing lists, source code control systems,
      and issue tracking systems that are managed by, or on behalf of, the
      Licensor for the purpose of discussing and improving the Work, but
      excluding communication that is conspicuously marked or otherwise
      designated in writing by the copyright owner as "Not a Contribution."

      "Contributor" shall mean Licensor and any individual or Legal Entity
      on behalf of whom a Contribution has been received by Licensor and
      subsequently incorporated within the Work.

   2. Grant of Copyright License. Subject to the terms and conditions of
      this License, each Contributor hereby grants to You a perpetual,
      worldwide, non-exclusive, no-charge, royalty-free, irrevocable
      copyright license to reproduce, prepare Derivative Works of,
      publicly display, publicly perform, sublicense, and distribute the
      Work and such Derivative Works in Source or Object form.

   3. Grant of Patent License. Subject to the terms and conditions of
      this License, each Contributor hereby grants to You a perpetual,
      worldwide, non-exclusive, no-charge, royalty-free, irrevocable
      (except as stated in this section) patent license to make, have made,
      use, offer to sell, sell, import, and otherwise transfer the Work,
      where such license applies only to those patent claims licensable
      by such Contributor that are necessarily infringed by their
      Contribution(s) alone or by combination of their Contribution(s)
      with the Work to which such Contribution(s) was submitted. If You
      institute patent litigation against any entity (including a
      cross-claim or counterclaim in a lawsuit) alleging that the Work
      or a Contribution incorporated within the Work constitutes direct
      or contributory patent infringement, then any patent licenses
      granted to You under this License for that Work shall terminate
      as of the date such litigation is filed.

   4. Redistribution. You may reproduce and distribute copies of the
      Work or Derivative Works thereof in any medium, with or without
      modifications, and in Source or Object form, provided that You
      meet the following conditions:

      (a) You must give any other recipients of the Work or
          Derivative Works a copy of this License; and

      (b) You must cause any modified files to carry prominent notices
          stating that You changed the files; and

      (c) You must retain, in the Source form of any Derivative Works
          that You distribute, all copyright, patent, trademark, and
          attribution notices from the Source form of the Work,
          excluding those notices that do not pertain to any part of
          the Derivative Works; and

      (d) If the Work includes a "NOTICE" text file as part of its
          distribution, then any Derivative Works that You distribute must
          include a readable copy of the attribution notices contained
          within such NOTICE file, excluding those notices that do not
          pertain to any part of the Derivative Works, in at least one
          of the following places: within a NOTICE text file distributed
          as part of the Derivative Works; within the Source form or
          documentation, if provided along with the Derivative Works; or,
          within a display generated by the Derivative Works, if and
          wherever such third-party notices normally appear. The contents
          of the NOTICE file are for informational purposes only and
          do not modify the License. You may add Your own attribution
          notices within Derivative Works that You distribute, alongside
          or as an addendum to the NOTICE text from the Work, provided
          that such additional attribution notices cannot be construed
          as modifying the License.

      You may add Your own copyright statement to Your modifications and
      may provide additional or different license terms and conditions
      for use, reproduction, or distribution of Your modifications, or
      for any such Derivative Works as a whole, provided Your use,
      reproduction, and distribution of the Work otherwise complies with
      the conditions stated in this License.

   5. Submission of Contributions. Unless You explicitly state otherwise,
      any Contribution intentionally submitted for inclusion in the Work
      by You to the Licensor shall be under the terms and conditions of
      this License, without any additional terms or conditions.
      Notwithstanding the above, nothing herein shall supersede or modify
      the terms of any separate license agreement you may have executed
      with Licensor regarding such Contributions.

   6. Trademarks. This License does not grant permission to use the trade
      names, trademarks, service marks, or product names of the Licensor,
      except as required for reasonable and customary use in describing the
      origin of the Work and reproducing the content of the NOTICE file.

   7. Disclaimer of Warranty. Unless required by applicable law or
      agreed to in writing, Licensor provides the Work (and each
      Contributor provides its Contributions) on an "AS IS" BASIS,
      WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
      implied, including, without limitation, any warranties or conditions
      of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
      PARTICULAR PURPOSE. You are solely responsible for determining the
      appropriateness of using or redistributing the Work and assume any
      risks associated with Your exercise of permissions under this License.

   8. Limitation of Liability. In no event and under no legal theory,
      whether in tort (including negligence), contract, or otherwise,
      unless required by applicable law (such as deliberate and grossly
      negligent acts) or agreed to in writing, shall any Contributor be
      liable to You for damages, including any direct, indirect, special,
      incidental, or consequential damages of any character arising as a
      result of this License or out of the use or inability to use the
      Work (including but not limited to damages for loss of goodwill,
      work stoppage, computer failure or malfunction, or any and all
      other commercial damages or losses), even if such Contributor
      has been advised of the possibility of such damages.

   9. Accepting Warranty or Additional Liability. While redistributing
      the Work or Derivative Works thereof, You may choose to offer,
      and charge a fee for, acceptance of support, warranty, indemnity,
      or other liability obligations and/or rights consistent with this
      License. However, in accepting such obligations, You may act only
      on Your own behalf and on Your sole responsibility, not on behalf
      of any other Contributor, and only if You agree to indemnify,
      defend, and hold each Contributor harmless for any liability
      incurred by, or claims asserted against, such Contributor by reason
      of your accepting any such warranty or additional liability.

   END OF TERMS AND CONDITIONS

   APPENDIX: How to apply the Apache License to your work.

      To apply the Apache License to your work, attach the following
      boilerplate notice, with the fields enclosed by brackets "[]"
      replaced with your own identifying information. (Don't include
      the brackets!)  The text should be enclosed in the appropriate
      comment syntax for the file format. We also recommend that a
      file or class name and description of purpose be included on the
      same "printed page" as the copyright notice for easier
      identification within third-party archives.

   Copyright [yyyy] [name of copyright owner]

   Licensed under the Apache License, Version 2.0 (the "License");
   you may not use this file except in compliance with the License.
   You may obtain a copy of the License at

       http://www.apache.org/licenses/LICENSE-2.0

   Unless required by applicable law or agreed to in writing, software
   distributed under the License is distributed on an "AS IS" BASIS,
   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
   See the License for the specific language governing permissions and
   limitations under the License.


================================================
FILE: README.md
================================================
# ComfyUI InstantID FaceSwap v0.1.1
<sub>[About](#comfyui-instantid-faceswap-v011) | [Installation guide](#installation-guide) | [Custom nodes](#custom-nodes) | [Workflows](#workflows) | [Tips](#tips) | [Changelog](#changelog)</sub>

Implementation of [faceswap](https://github.com/nosiu/InstantID-faceswap/tree/main) based on [InstantID](https://github.com/InstantID/InstantID) for ComfyUI. \
Since version 0.1.0, it also supports generating people from text prompts.
<br>
**Works ONLY with SDXL checkpoints**
<br>
<br>
![image](https://github.com/user-attachments/assets/0c97dccf-ac8a-43f7-b50b-8bbf7ed81049)

![image](https://github.com/user-attachments/assets/bbc88aaf-fba4-43f1-80ea-fece379308db)



## Installation guide
<sub>[About](#comfyui-instantid-faceswap-v011) | [Installation guide](#installation-guide) | [Custom nodes](#custom-nodes) | [Workflows](#workflows) | [Tips](#tips) | [Changelog](#changelog)</sub>

1. Clone or download this repository and put it into **ComfyUI/custom_nodes**
2. Open a command line in the **ComfyUI/custom_nodes/comfyui-instantId-faceswap/** folder and run `pip install -r requirements.txt` to install the dependencies
3. Manually download required files and create required folders:
    - [antelopev2 models](https://huggingface.co/DIAMONIK7777/antelopev2/tree/main)
      and put them into **ComfyUI/models/insightface/models/antelopev2** folder
       -  1k3d68.onnx
       -  2d106det.onnx
       -  genderage.onnx
       -  glintr100.onnx
       -  scrfd_10g_bnkps.onnx

    - [IpAdapter and ControlNet](https://huggingface.co/InstantX/InstantID/tree/main)
       - ip-adapter.bin - put it into **ComfyUI/models/ipadapter**
       - ControlNetModel/diffusion_pytorch_model.safetensors and ControlNetModel/config.json - put those files in a new folder inside **ComfyUI/models/controlnet**

The hierarchy of the newly added files should look like this:
```
ComfyUI
\---models
    \---ipadapter
           ip-adapter.bin
    \---controlnet
        \--- FOLDER_YOU_CREATED
              config.json
              diffusion_pytorch_model.safetensors
    \---insightface
        \---models
            \---antelopev2
                  1k3d68.onnx
                  2d106det.onnx
                  genderage.onnx
                  glintr100.onnx
                  scrfd_10g_bnkps.onnx
```

*Note: You don't need to place the 'ipadapter' and 'controlnet' folders in this specific location if you already have them somewhere else (you can also rename ip-adapter.bin and the ControlNetModel folder to something of your liking).
Instead, you can edit `ComfyUI/extra_model_paths.yaml` and add the folders containing those files to the config.
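
For reference, a config fragment for `ComfyUI/extra_model_paths.yaml` might look like the following. The top-level key name and the paths are placeholders, not values this extension requires; check ComfyUI's bundled `extra_model_paths.yaml.example` for the exact format:

```yaml
# Hypothetical example -- adjust the paths to wherever your files actually live.
my_models:
    controlnet: /path/to/your/controlnet/folder
    ipadapter: /path/to/your/ipadapter/folder
```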

## Custom nodes
<sub>[About](#comfyui-instantid-faceswap-v011) | [Installation guide](#installation-guide) | [Custom nodes](#custom-nodes) | [Workflows](#workflows) | [Tips](#tips) | [Changelog](#changelog)</sub>

- ### Load Insightface:
   Loads Insightface. Models need to be in a specific location. Check the [Installation guide](#installation-guide) for details.

- ### Load instantId adapter:
   Loads the InstantId adapter and resampler. The model needs to be in a specific location. Check the [Installation guide](#installation-guide) for details. The resampler is used to prepare face embeds for ControlNet and the adapter.

- ### Apply instantId adapter:
   Applies the InstantId adapter to the model. This is optional—you can achieve good results without using this node.

   **Params:**
   - **checkpoint** - SDXL checkpoint
   - **instantId_adapter** - instantId adapter
   - **face_conditioning** - face conditioning prepared by the resampler
   - **strength** - strength of the instantId adapter

- ### Apply instantId ControlNet:
   Applies InstantId ControlNet.

   **Params:**
   - **positive**  - positive prompts
   - **negative**  - negative prompts
   - **face_conditioning** - face conditioning prepared by the resampler
   - **control_net** - instantId Controlnet
   - **strength** - strength of instantId ControlNet

- ### Apply instantId and ControlNet:
    A subgraph node that bundles several operations into a single node for convenience. It includes the following nodes: LoadInstantIdAdapter, FaceEmbedCombine, ControlNetLoader, InstantIdAdapterApply, and ControlNetInstantIdApply.

    This node streamlines the process by loading the InstantId adapter, combining face embeddings, loading the ControlNet, and applying both the InstantId adapter and ControlNet in one step.

- ### FaceEmbed for instantId
   Prepares face embeds for generation. You can chain multiple face embeds.

   **Params:**
   - **insightface** - insightface
   - **face_image** - input image from which to extract embed data
   - **face_embeds** (*optional*) - additional face embed(s)

- ### FaceEmbed Combine
   Prepares face embeds for ControlNet and the adapter.

   **Params:**
   - **resampler** - resampler
   - **face_embeds** - face_embeds

- ### Get Angle from face
   Returns the angle (in degrees) by which the image must be rotated counterclockwise to align the face. Since there can be more than one face in the image, face search is performed only in the area of the drawn mask, enlarged by the pad parameter.

   **Note:** If the face is rotated by an extreme angle, insightface won't be able to find the correct position of face keypoints, so the rotation angle might not always be accurate. In these cases, manually draw your own KPS.


   **Params:**
   - **insightface** - insightface
   - **image** - image with the face to rotate
   - **mask** - mask
   - **rotate_mode** - available options:
      - *none* - returns 0
      - *loseless* - returns the closest angle to 90, 180, 270 degrees
      - *any* - returns a specific angle by which the image should be rotated
   - **pad_top** - how many pixels to enlarge the mask upwards
   - **pad_right** - how many pixels to enlarge the mask to the right
   - **pad_bottom** - how many pixels to enlarge the mask downwards
   - **pad_left** - how many pixels to enlarge the mask to the left
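
   Conceptually, the *loseless* mode snaps an arbitrary detected angle to the nearest multiple of 90 degrees, so the image can be rotated without resampling. A minimal sketch of that idea (`snap_lossless` is a hypothetical helper, not the node's actual code):

   ```python
   def snap_lossless(angle: float) -> int:
       """Snap an angle (in degrees) to the nearest of 0, 90, 180, 270."""
       return int(round(angle / 90.0)) * 90 % 360
   ```

   For example, a detected tilt of 80 degrees snaps to 90, while a tilt of 10 degrees snaps to 0 (no rotation).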


- ### Get Angle from KPS data
   Returns the angle (in degrees) by which the image must be rotated counterclockwise to align the face.

   **Params:**
   - **rotate_mode** - available options:
      - *none* - returns 0
      - *loseless* - returns the closest angle to 90, 180, 270 degrees
      - *any* - returns a specific angle by which the image should be rotated
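
   Conceptually, the alignment angle can be derived from the eye keypoints alone: in image coordinates (y pointing down), the tilt of the line through the eyes is the angle by which the image must be rotated to level the face. A sketch under that assumption (the node's actual computation may differ):

   ```python
   import math

   def face_tilt_degrees(left_eye, right_eye):
       """Tilt of the eye line in image coordinates (y grows downward).

       Returns the rotation, in degrees, that would make the eye line
       horizontal.
       """
       dx = right_eye[0] - left_eye[0]
       dy = right_eye[1] - left_eye[1]
       return math.degrees(math.atan2(dy, dx))
   ```

   Level eyes give 0; a right eye 10 px lower than the left one at a 10 px horizontal distance gives 45 degrees.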


- ### Rotate Image
   Rotates the image by the given angle and expands it.

   **Params:**
  - **image** - image
  - **angle** - angle
  - **counter_clockwise** - direction

- ### Remove rotation padding
   Removes the expanded region added by two rotations (first to align the face, and second to return to the original position).

   **Params:**
  - **original_image** - image before rotation
  - **rotated_image** - rotated image

- ### Draw KPS
   Allows you to draw your own keypoints (KPS), useful when you get the error `"No face detected in pose image"` or when using InstantId to generate images from prompts only. Click and drag the KPS to move them around.

   When you place your KPS in the desired position, this node will show the angle by which the image should be rotated to align the face.

   You can adjust the opacity of each keypoint to sacrifice likeness, for example, when adding "glasses".

   **Shortcuts:**\
      **CTRL + DRAG** - move around\
      **CTRL + WHEEL** - zoom in / out\
      **ALT + WHEEL** - decrease / increase the distance of the other points from the blue point (nose KPS)

   **Params:**
   - **image_reference** (optional) - an image that serves as a background to more accurately match the appropriate points. If provided, the resulting image will have the width and height of this image.
   - **width** - width of the image (disabled if `image_reference` is provided)
   - **height** - height of the image (disabled if `image_reference` is provided)

- ### 3d KPS from image
   Allows you to extract 3D keypoints (KPS) of a face from an image. This is useful when generating content from text prompts and wanting to rotate the face while preserving the distance between the eyes, nose, and mouth.

   To use it, connect the `image` node, then click the **"Get KPS From Image"** button. Afterward, you can adjust the scale, position, and rotation of the face by clicking the **"Change KPS"** button.

   **IMPORTANT:** 
      - Clicking Get KPS From Image will run InsightFace to extract KPS data, so it’s best not to use this in the middle of the generation process (this depends on your system's performance).
      - You cannot manually change the distance between KPS in this node.

   Once your KPS are placed in the desired position, the node will show the angle by which the image should be rotated to align the face.

   You can adjust the opacity of each keypoint to sacrifice likeness, for example, when adding "glasses".

   **Shortcuts:**\
      **CTRL + DRAG** - move around\
      **CTRL + WHEEL** - zoom in / out\
      **ALT + WHEEL** - scale KPS

   **Params:**
   - **image** - an image of a face from which the KPS will be calculated
   - **width** - width of the image
   - **height** - height of the image


- ### Preprocess image for instantId:
   Cuts out the mask area wrapped in a square, enlarges it in each direction by the `pad` parameter, and resizes it (to dimensions rounded down to multiples of 8). It also creates a control image for InstantId ControlNet.

   **Note:** If the face is rotated by an extreme angle, the prepared `control_image` may be drawn incorrectly.

   If the `insightface` param is not provided, it will not create a control image, and you can use this node as a regular node for inpainting (to cut the masked region with padding and later compose it).

   **Params:**
   - **image** - your pose image (the image in which the face will be swapped)
   - **mask** - drawn mask (the area to be changed must contain the face; you can also mask other features like hair or hats and change them later with prompts)
   - **insightface** (optional) - loaded insightface
   - **width** - width of the image in pixels (see `resize_mode`)
   - **height** - height of the image in pixels (see `resize_mode`)
   - **resize_mode** - available options:
      - *auto* - automatically calculates the image size so that the area is `width` x `height`.
         For SDXL, you probably want to use this option with:
         **width: 1024, height: 1024**
      - *scale by width* - ignores the provided `height` and calculates it based on the aspect ratio
      - *scale by height* - ignores the provided `width` and calculates it based on the aspect ratio
      - *free* - uses the provided `width` and `height`
   - **pad** - how many pixels to enlarge the mask in each direction
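
   The *auto* option can be thought of as area-preserving scaling: pick a uniform scale factor so the result covers roughly `width` x `height` pixels, then round both dimensions down to multiples of 8. A sketch of that arithmetic (a hypothetical helper, not the node's exact code):

   ```python
   import math

   def auto_resize(src_w, src_h, target_w=1024, target_h=1024):
       """Scale (src_w, src_h) so its area is close to target_w * target_h,
       keeping the aspect ratio and rounding down to multiples of 8."""
       scale = math.sqrt((target_w * target_h) / (src_w * src_h))
       new_w = int(src_w * scale) // 8 * 8
       new_h = int(src_h * scale) // 8 * 8
       return new_w, new_h
   ```

   For example, an 800x600 crop would be scaled up so its area approaches 1024x1024 while staying roughly 4:3.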

- ### Preprocess image for instantId (Advanced):
   Same as **Preprocess Image for InstantId** with five additional parameters.

   **Params:**
   - **upscale_method**  - *nearest-exact*, *bilinear*, *area*, *bicubic*, *lanczos*
   - **pad_top** - how many pixels to enlarge the mask upwards
   - **pad_right** - how many pixels to enlarge the mask to the right
   - **pad_bottom** - how many pixels to enlarge the mask downwards
   - **pad_left** - how many pixels to enlarge the mask to the left


- ### Randomize 2d KPS
   Randomizes the position, angle, and rotation of the KPS based on the provided parameters.

   **Params:**
   - **angle_min** - minimum rotation angle (the rotation point is the center of the KPS)
   - **angle_max** - maximum rotation angle (the rotation point is the center of the KPS)
   - **scale_min** - minimum scaling value relative to the KPS center (1 means no scaling)
   - **scale_max** - maximum scaling value relative to the KPS center (1 means no scaling)
   - **translate_x** - maximum shift of the KPS along the X axis. For example, setting it to 200 will randomly select a shift value between -200 and 200
   - **translate_y** - maximum shift of the KPS along the Y axis. For example, setting it to 200 will randomly select a shift value between -200 and 200
   - **border** - edge threshold; if the KPS gets too close to the image border during translation, rotation, or scaling, it will be repositioned to this value
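
   Put together, the randomization amounts to a rotation and scaling about the KPS centroid, a random translation, and a clamp that keeps every point at least `border` pixels inside the image. A self-contained sketch of that idea (a hypothetical helper; the node's implementation may differ in details):

   ```python
   import math
   import random

   def randomize_kps(kps, angle_min, angle_max, scale_min, scale_max,
                     translate_x, translate_y, border, img_w, img_h, rng=random):
       """Randomly rotate, scale, and translate 2D keypoints, clamped to the image."""
       angle = math.radians(rng.uniform(angle_min, angle_max))
       scale = rng.uniform(scale_min, scale_max)
       dx = rng.uniform(-translate_x, translate_x)
       dy = rng.uniform(-translate_y, translate_y)
       # The centroid of the KPS is the rotation/scaling pivot.
       cx = sum(x for x, _ in kps) / len(kps)
       cy = sum(y for _, y in kps) / len(kps)
       cos_a, sin_a = math.cos(angle), math.sin(angle)
       out = []
       for x, y in kps:
           rx, ry = x - cx, y - cy
           # Rotate and scale about the centroid, then translate.
           nx = cx + scale * (rx * cos_a - ry * sin_a) + dx
           ny = cy + scale * (rx * sin_a + ry * cos_a) + dy
           # Keep every point at least `border` px away from the edges.
           out.append((min(max(nx, border), img_w - border),
                       min(max(ny, border), img_h - border)))
       return out
   ```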


- ### Randomize 3d KPS
   Randomizes the rotation of the KPS around three axes. Setting any of the parameters will randomly select a rotation angle around the corresponding axis. The rotation point is the center of the KPS.
   Example: Setting rotate_x to 20 will rotate the KPS by a random angle between -20 and 20 degrees.

   **Params:**
   - **rotate_x** - rotation angle around the X axis
   - **rotate_y** - rotation angle around the Y axis
   - **rotate_z** - rotation angle around the Z axis


- ### Scale 2d KPS by
   Scales the KPS data by a given factor.

   **Params:**
   - **scale**: scaling factor


- ### Scale 2d KPS
   Scales the KPS data to the specified width and height.

   **Params:**
   - **width**: desired width,
   - **height**: desired height


- ### Rotate 2d KPS
   Rotates the KPS by the given angle and expands it.

   **Params:**
   - **angle**: rotation angle


 - ### Crop 2d KPS
   Crops the KPS.

   **Params:**
   - **x**: X coordinate of the top-left corner,
   - **y**: Y coordinate of the top-left corner,
   - **width**: width of the cropped area,
   - **height**: height of the cropped area


 - ### Create KPS Image
   Creates a `control_image` from `kps_data`.


 - ### Create mask from Kps
   Creates a mask based on the KPS position.

   **Params:**
   - **grow_by** - expands the mask by adding extra space on all sides. The additional margin on each side is equal to the mask’s dimension divided by this value. For example, if the KPS width is 20 pixels and grow_by is set to 10, an extra 20/10 (i.e., 2 pixels) will be added to the left and right sides — resulting in a new width of 20 + 2 + 2 = 24 pixels. The same applies to the height.
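
   The worked example above is easy to check numerically. A quick sketch of the growth arithmetic (a hypothetical helper, not the node's actual code):

   ```python
   def grow_mask_dims(width, height, grow_by):
       """Grow a mask's bounding box by (dimension / grow_by) on each side."""
       margin_w = width // grow_by
       margin_h = height // grow_by
       return width + 2 * margin_w, height + 2 * margin_h
   ```

   A 20x20 KPS box with `grow_by` 10 yields 20 + 2 + 2 = 24 on each dimension, matching the example above.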

## Workflows
<sub>[About](#comfyui-instantid-faceswap-v011) | [Installation guide](#installation-guide) | [Custom nodes](#custom-nodes) | [Workflows](#workflows) | [Tips](#tips) | [Changelog](#changelog)</sub>


You can find example workflows in the `/workflows` folder.
Nodes colors legend: \
**yellow** - node from this extension,\
**blue** - inputs, load your controlnets, models, images ...\
**purple** - you might want to configure those\
**cyan** - output images\
**green** - positive prompts\
**red** - negative prompts

If you set the mask blur option, remember that it will shrink the area you masked.

### simple.json
Face swap: Set your pose image, draw a mask, set your face reference (the face that will replace the masked area in the pose image), and that's it.

### simple_with_adapter.json
Same as `simple.json` with an additional node `Apply instantId adapter`

### simple_two_embeds.json
Same as `simple.json`, but allows you to provide two face references. You can use this to merge two different faces or just provide a second reference for the first face.

### draw_kps.json
Face swap: Set your pose image, draw a mask, set your face reference (the face that will replace the masked area in the pose image), and then click the "draw KPS" button on the `Draw KPS` node to set your KPS.

<details>
  <summary>View Example Image</summary>
                                 
  ![DRAW KPS](https://github.com/user-attachments/assets/9c87fa80-bb51-4df5-aca8-8cfee3d1668b)
</details>

### draw_kps_rotate.json
Same as `draw_kps.json`, but it will also rotate the pose image. After setting your KPS, you should set the angle by which you want to rotate the image to align the face properly.

<details>
  <summary>View Example Image</summary>

  ![KPS SET ANGLE](https://github.com/user-attachments/assets/665dbfdd-79ce-47a0-9004-40c0cf48596c)
</details>

### auto_rotate.json
Same as `simple_with_adapter.json`, but it will automatically detect the angle of rotation based on the mask and padding set in the `Get Angle from Face` node.

### promp2image.json
Generates an image based only on the face reference and prompts. Set your face reference, draw the KPS where the face should be drawn, and add prompts like "man sitting in the park."

### promp2image_detail_pass.json
Same as `promp2image.json`, but this one expects the KPS you draw to be small, so the face is not detailed (or may even be deformed). The second pass should fix the face.


### prompts2img_2faces_enhancement.json
A workflow that generates two faces in one image and enhances them one by one.
Set your face references and KPS for one image, then set a second KPS in another region of the picture. Good results depend on your prompts.

<details>
  <summary>View Example Image</summary>
    
  ![Two KPS one flow](https://github.com/user-attachments/assets/fbaa38df-3400-401d-b644-087723e6488c)
</details>

### inpaint.json
The `Preprocess Image for InstantId` and `Preprocess Image for InstantId (Advanced)` nodes can resize your images together with a mask, which makes them useful for inpainting in general. This workflow shows you how to do it.

<details>
  <summary>View Example Image</summary>
    
  ![basic inpaint](https://github.com/user-attachments/assets/bda258b1-a988-47f5-beb6-105289c990ac)

</details>

### prop2image_latent_upscale.json
Similar to `promp2image_detail_pass.json`, this workflow allows you to draw your KPS. The workflow will run a first pass at 50%, upscale the latent by `1.4`, and finish with a detail pass on the face area.

### prop2image_latent_upscale_with_2d_randomizer.json
Same as `prop2image_latent_upscale.json`, but it will randomize the position of the face within the image.

### prop2image_latent_upscale_with_3d_and_2d_randomizer.json
Similar to `prop2image_latent_upscale_with_2d_randomizer.json`, but instead of drawing your KPS, you retrieve them from a face image (in 3D).
Click the **"Get KPS from image"** button on the `3D KPS from Image` node, then use the **"Change KPS"** button to adjust the position of your KPS. You can also randomize the 3D rotation in this workflow.

### prop2image_latent_upscale_with_3d_and_2d_randomizer_with_rotation.json
Exactly the same as `prop2image_latent_upscale_with_3d_and_2d_randomizer.json`, with one small addition: you can set the face rotation method during the last pass. The workflow will rotate the face to a "straight" position, process the image, and then composite it into the final result.

The default rotation is set to "any," so you might encounter some artifacts from the rotation. You can adjust this setting as needed.

## Tips
<sub>[About](#comfyui-instantid-faceswap-v011) | [Installation guide](#installation-guide) | [Custom nodes](#custom-nodes) | [Workflows](#workflows) | [Tips](#tips) | [Changelog](#changelog)</sub>

- Most workflows require you to draw a mask on the pose image.
- If you encounter the error `No face detected in pose image`, try drawing a larger mask, increasing the `pad` parameter, or drawing the KPS yourself.
- You can adjust the opacity of each keypoint to preserve original features or rely more on your prompts without sacrificing the overall likeness of the face.
- You can modify more than just the face — add accessories like a hat, change hair, or even alter expressions.
- If you're changing a lot of elements unrelated to the face, it's a good idea to add a second pass focused primarily on the face area to enhance detail.
- To improve results, you can integrate other extensions such as ControlNet for inpainting, Fooocus inpaint, FaceShaper, Expression Lora, and many more.
- To understand the relationship between ControlNet and the adapter, check the official paper linked in the instantId repository: https://github.com/instantX-research/InstantID?tab=readme-ov-file

## Changelog
<sub>[About](#comfyui-instantid-faceswap-v011) | [Installation guide](#installation-guide) | [Custom nodes](#custom-nodes) | [Workflows](#workflows) | [Tips](#tips) | [Changelog](#changelog)</sub>

- ### 0.1.1 (03.03.2025)
   - Introduced new nodes to ensure proper KPS size.
   - Removed the ability to draw masks on the KPS node (a separate node is available for this).
   - Added the ability to add transparency to individual KPS, providing similar functionality as `ControlNet Scale` but for specific parts of the face (e.g., adding glasses).
   - The generation of KPS control images has been entirely moved to the backend (Python). You can still draw KPS manually, but this change reduces the creation of temporary files.
   - Drawn KPS positions are now saved into the workflow.
   - Added the ability to import 3D KPS positions from the face, allowing rotation, scaling, and movement of those points while preserving the distance between the eyes, nose, and mouth.
   - Added options to randomize position, rotation, and KPS scaling to diversify final images.
   - Added the ability to randomize 3D face rotation to further diversify results.
   - To diversify results even further, you can use tools like [comfyui-text-randomizer](https://github.com/nosiu/comfyui-text-randomizer), which was created as a side project during the development of this repository.

   **Note:** Some old workflows will not be compatible with this version.


- ### 0.1.0 (20.10.2024)
   - The code was rewritten from scratch and now uses the ComfyUI backend. This allows you to chain LORAs or ControlNets as needed, providing greater control over the entire process.
For example, you can now draw your own KPS, enabling both text-to-image and image-to-image generation.
   - Removed most dependencies (including Diffusers).
   - Removed all old nodes and introduced new ones.
   - The script that automatically generated workflows based on all faces in a specific catalog has been removed.

   **Note:** Old workflows will not work with this version.


- ### 0.0.5 (25.02.2024)
   - The `mask_strength` parameter has been fixed; it now functions correctly. Previously, it was stuck at *0.9999* regardless of the chosen value.
   - The `ip_adapter_scale` parameter has been fixed. If you were using the xformers, this parameter could be stuck at *50*.
   - Changed the method of processing face_embed(s).
   - Added the `rotate_face` parameter. It will attempt to rotate the image to keep the face straight before processing and rotate it back to the original position afterward.

- ### 0.0.4 (14.02.2024)
   - To save memory, you can run Comfy with the `--fp16-vae` argument to disable the default VAE upcasting to float32.
   - Merged the old `resize` and `resize_to` options into just `resize` for the Faceswap generate node. To emulate the old behavior where resize was unchecked, select `don't`.
   - Added a manual offload mechanism to save GPU memory.
   - Changed the minimum and maximum values for `mask_strength` to range from 0.00 to 1.00.
- ### 0.0.3 (07.02.2024)
   - Fixed an error that caused new face_embeds to be added when editing previous ones
- ### 0.0.2 (05.02.2024)
   - Introduced the workflow generator script - [more information here](#workflow-script-beta)
   - Updated the diffusers dependency to version 0.26.x. Run either:
   ```bash
   pip install -r requirements.txt
   ```
   or
   ```bash
   pip install -U diffusers~=0.26.0
   ```

- ### 0.0.1 (01.02.2024)
  - Progress bar and latent preview added for generation node



================================================
FILE: __init__.py
================================================
from .nodes import NODE_CLASS_MAPPINGS, NODE_DISPLAY_NAME_MAPPINGS
WEB_DIRECTORY = './ui/'

__all__ = ['NODE_CLASS_MAPPINGS', 'NODE_DISPLAY_NAME_MAPPINGS', 'WEB_DIRECTORY']

================================================
FILE: ip_adapter/instantId.py
================================================
import torch
from comfy.ldm.modules.attention import optimized_attention

class InstantId(torch.nn.Module):
  def __init__(self, ip_adapter):
    super().__init__()

    self.to_kvs = torch.nn.ModuleDict()

    for key, value in ip_adapter.items():
      k = key.replace(".weight", "").replace(".", "_")
      self.to_kvs[k] = torch.nn.Linear(value.shape[1], value.shape[0], bias=False)
      self.to_kvs[k].weight.data = value


# based on https://github.com/laksjdjf/IPAdapter-ComfyUI/blob/main/ip_adapter.py#L256
class CrossAttentionPatch:
  def __init__(self, scale, instantId, cond, number):
    self.scales = [scale]
    self.instantIds = [instantId]
    self.conds = [cond]
    self.number = number

  def set_new_condition(self, scale, instantId, cond, number):
    self.scales.append(scale)
    self.instantIds.append(instantId)
    self.conds.append(cond)
    self.number = number

  def __call__(self, q, k, v, extra_options):
    dtype = torch.float16
    hidden_states = optimized_attention(q, k, v, extra_options["n_heads"])
    for scale, cond, instantId in zip(self.scales, self.conds, self.instantIds):
      k_cond = instantId.to_kvs[str(self.number*2+1) + "_to_k_ip"](cond).to(dtype=dtype)
      v_cond = instantId.to_kvs[str(self.number*2+1) + "_to_v_ip"](cond).to(dtype=dtype)
      ip_hidden_states = optimized_attention(q, k_cond, v_cond, extra_options["n_heads"])
      hidden_states = hidden_states + ip_hidden_states * scale
    # return only after every accumulated condition has been applied,
    # otherwise conditions added via set_new_condition are silently skipped
    return hidden_states.to(dtype=dtype)

================================================
FILE: ip_adapter/resampler.py
================================================
# modified from https://github.com/mlfoundations/open_flamingo/blob/main/open_flamingo/src/helpers.py
import math

import torch
import torch.nn as nn

# FFN
def FeedForward(dim, mult=4):
  inner_dim = int(dim * mult)
  return nn.Sequential(
    nn.LayerNorm(dim),
    nn.Linear(dim, inner_dim, bias=False),
    nn.GELU(),
    nn.Linear(inner_dim, dim, bias=False),
  )

def reshape_tensor(x, heads):
  bs, length, _ = x.shape
  #(bs, length, width) --> (bs, length, n_heads, dim_per_head)
  x = x.view(bs, length, heads, -1)
  # (bs, length, n_heads, dim_per_head) --> (bs, n_heads, length, dim_per_head)
  x = x.transpose(1, 2)
  # (bs, n_heads, length, dim_per_head) --> (bs*n_heads, length, dim_per_head)
  x = x.reshape(bs, heads, length, -1)
  return x


class PerceiverAttention(nn.Module):
  def __init__(self, *, dim, dim_head=64, heads=8):
    super().__init__()
    self.scale = dim_head**-0.5
    self.dim_head = dim_head
    self.heads = heads
    inner_dim = dim_head * heads

    self.norm1 = nn.LayerNorm(dim)
    self.norm2 = nn.LayerNorm(dim)

    self.to_q = nn.Linear(dim, inner_dim, bias=False)
    self.to_kv = nn.Linear(dim, inner_dim * 2, bias=False)
    self.to_out = nn.Linear(inner_dim, dim, bias=False)

  def forward(self, x, latents):
    """
    Args:
        x (torch.Tensor): image features
            shape (b, n1, D)
        latent (torch.Tensor): latent features
            shape (b, n2, D)
    """
    x = self.norm1(x)
    latents = self.norm2(latents)

    b, l, _ = latents.shape

    q = self.to_q(latents)
    kv_input = torch.cat((x, latents), dim=-2)
    k, v = self.to_kv(kv_input).chunk(2, dim=-1)

    q = reshape_tensor(q, self.heads)
    k = reshape_tensor(k, self.heads)
    v = reshape_tensor(v, self.heads)

    # attention
    scale = 1 / math.sqrt(math.sqrt(self.dim_head))
    weight = (q * scale) @ (k * scale).transpose(-2, -1) # More stable with f16 than dividing afterwards
    weight = torch.softmax(weight.float(), dim=-1).type(weight.dtype)
    out = weight @ v

    out = out.permute(0, 2, 1, 3).reshape(b, l, -1)

    return self.to_out(out)


class Resampler(nn.Module):
  def __init__(
    self,
    dim=1024,
    depth=8,
    dim_head=64,
    heads=16,
    num_queries=8,
    embedding_dim=768,
    output_dim=1024,
    ff_mult=4,
  ):
    super().__init__()

    self.latents = nn.Parameter(torch.randn(1, num_queries, dim) / dim**0.5)

    self.proj_in = nn.Linear(embedding_dim, dim)

    self.proj_out = nn.Linear(dim, output_dim)
    self.norm_out = nn.LayerNorm(output_dim)

    self.layers = nn.ModuleList([])
    for _ in range(depth):
      self.layers.append(
        nn.ModuleList(
          [
            PerceiverAttention(dim=dim, dim_head=dim_head, heads=heads),
            FeedForward(dim=dim, mult=ff_mult),
          ]
        )
      )

  def forward(self, x):

    latents = self.latents.repeat(x.size(0), 1, 1)

    x = self.proj_in(x)

    for attn, ff in self.layers:
        latents = attn(x, latents) + latents
        latents = ff(latents) + latents

    latents = self.proj_out(latents)
    return self.norm_out(latents)

================================================
FILE: nodes.py
================================================
import os
import cv2
import torch
import numpy as np
import json
import comfy.utils
import folder_paths
from urllib.parse import urlparse, parse_qs
from server import PromptServer
from aiohttp import web
from comfy_execution.graph_utils import GraphBuilder
from .ip_adapter.resampler import Resampler
from .ip_adapter.instantId import InstantId
from insightface.app import FaceAnalysis
from .utils import draw_kps, set_model_patch_replace, resize_to_fit_area, \
  kps_rotate_2d, kps_rotate_3d, kps3d_to_kps2d, calculate_size_after_rotation, \
  get_mask_bbox_with_padding, get_kps_from_image, get_angle, image_rotate_with_pad, \
  get_bbox_from_kps


folder_paths.folder_names_and_paths["ipadapter"] = ([os.path.join(folder_paths.models_dir, "ipadapter")], folder_paths.supported_pt_extensions)
INSIGHTFACE_PATH = os.path.join(folder_paths.models_dir, "insightface")
CATEGORY_NAME = "InstantId Faceswap"
MAX_RESOLUTION = 16384


#==============================================================================
# get key points and landmarks position as 3d points and send it back to frontend when requested
routes = PromptServer.instance.routes
@routes.post('/get_keypoints_for_instantId')
async def proxy_handle(request):
  post = await request.json()

  try:
    app = FaceAnalysis(
      name="antelopev2",
      root=INSIGHTFACE_PATH,
      providers=["CPUExecutionProvider", "CUDAExecutionProvider"]
    )

    app.prepare(ctx_id=0, det_size=(640, 640))
    parsed_url = urlparse(post['image'])
    queries = parse_qs(parsed_url.query)
    path = os.path.join(folder_paths.get_directory_by_type(queries['type'][0]), queries['filename'][0])
    image = cv2.imread(path)
    faces = app.get(cv2.cvtColor(image, cv2.COLOR_RGB2BGR))
    if len(faces) == 0:
      raise Exception("No face detected")

    face = sorted(faces, key=lambda x: (x["bbox"][2] - x["bbox"][0]) * (x["bbox"][3] - x["bbox"][1]))[-1] # only use the maximum face

    landmarks_3d = face.landmark_3d_68
    # KPS
    left_eye = np.mean(landmarks_3d[36:42], axis=0)
    right_eye = np.mean(landmarks_3d[42:48], axis=0)
    nose_tip = np.array([landmarks_3d[33][0], landmarks_3d[33][1], landmarks_3d[30][2]])
    left_mouth = landmarks_3d[48]
    right_mouth = landmarks_3d[54]
    
    return web.json_response({
      "status": "ok",
      "data": {
        "kps": [ left_eye.tolist(), right_eye.tolist(), nose_tip.tolist(), left_mouth.tolist(), right_mouth.tolist() ],
        "jawline": landmarks_3d[0:17].tolist(), 
        "eyebrow_left": landmarks_3d[17:22].tolist(),
        "eyebrow_right": landmarks_3d[22:27].tolist(),
        "nose_bridge": landmarks_3d[27:31].tolist(), 
        "nose_lower": landmarks_3d[31:36].tolist(), 
        "eye_left": landmarks_3d[36:42].tolist(),
        "eye_right": landmarks_3d[42:48].tolist(),
        "mouth_outer": landmarks_3d[48:60].tolist(),
        "mouth_inner": landmarks_3d[60:68].tolist(),
        }
    })
  except Exception as error:
      if str(error) == "No face detected":
        return web.json_response({
          "error": "No face detected"
        })
      else:
        raise error


#==============================================================================
class FaceEmbed:
  def __init__(self):
    pass

  @classmethod
  def INPUT_TYPES(self):
    return {
      "required": {
        "insightface":  ("INSIGHTFACE_APP",),
        "face_image":  ("IMAGE",)
      },
      "optional": {
        "face_embeds": ("FACE_EMBED",)
      }
    }

  RETURN_TYPES = ("FACE_EMBED",)
  RETURN_NAMES = ("face embeds",)
  FUNCTION = "make_face_embed"
  CATEGORY = CATEGORY_NAME

  def make_face_embed(self, insightface, face_image, face_embeds = None):
    face_image = (255.0 * face_image.cpu().numpy().squeeze()).clip(0, 255).astype(np.uint8)
    face_info = insightface.get(cv2.cvtColor(face_image, cv2.COLOR_RGB2BGR))

    assert len(face_info) > 0, "No face detected for face embed"

    face_info = sorted(face_info, key=lambda x: (x["bbox"][2] - x["bbox"][0]) * (x["bbox"][3] - x["bbox"][1]))[-1] # only use the maximum face
    face_emb = torch.tensor(face_info["embedding"], dtype=torch.float32).unsqueeze(0)

    if face_embeds is None:
      return (face_emb,)

    face_embeds = torch.cat((face_embeds, face_emb), dim=-2)
    return (face_embeds,)


#==============================================================================
class FaceEmbedCombine:
  def __init__(self):
    pass

  @classmethod
  def INPUT_TYPES(self):
    return {
      "required": {
        "resampler":  ("RESAMPLER",),
        "face_embeds":  ("FACE_EMBED",)
      },
    }

  RETURN_TYPES = ("FACE_CONDITIONING",)
  RETURN_NAMES = ("face conditioning",)
  FUNCTION = "combine_face_embed"
  CATEGORY = CATEGORY_NAME

  def combine_face_embed(self, resampler, face_embeds):
    embeds = torch.mean(face_embeds, dim=0, dtype=torch.float32).unsqueeze(0)
    embeds = embeds.reshape([1, -1, 512])
    conditionings = resampler(embeds).to(comfy.model_management.get_torch_device())
    return (conditionings,)


#==============================================================================
class AngleFromFace:
  rotate_modes = ["none", "loseless", "any"]
  def __init__(self):
      pass

  @classmethod
  def INPUT_TYPES(self):
    return {
      "required": {
        "insightface": ("INSIGHTFACE_APP",),
        "image": ("IMAGE", { "tooltip": "Pose image." }),
        "mask": ("MASK",),
        "rotate_mode": (self.rotate_modes,),
        "pad_top": ("INT", {"default": 100, "min": 0, "step": 1, "max": MAX_RESOLUTION}),
        "pad_right": ("INT", {"default": 100, "min": 0, "step": 1, "max": MAX_RESOLUTION}),
        "pad_bottom": ("INT", {"default": 100, "min": 0, "step": 1, "max": MAX_RESOLUTION}),
        "pad_left": ("INT", {"default": 100, "min": 0, "step": 1, "max": MAX_RESOLUTION}),
      },
    }

  RETURN_TYPES = ("FLOAT",)
  RETURN_NAMES = ("angle",)
  FUNCTION = "get_angle"
  CATEGORY = CATEGORY_NAME

  def get_angle(
        self, insightface, image, mask, rotate_mode,
        pad_top, pad_right, pad_bottom, pad_left
    ):

    p_x1, p_y1, p_x2, p_y2 = get_mask_bbox_with_padding(mask.squeeze(0), pad_top, pad_right, pad_bottom, pad_left)
    image = image[:, p_y1:p_y2, p_x1:p_x2]
    kps = get_kps_from_image(image, insightface)

    angle = 0.
    if rotate_mode != "none" :
      angle = get_angle(
        kps[0], kps[1],
        round_angle = True if rotate_mode == "loseless" else False
      )
    return (angle,)


#==============================================================================
class AngleFromKps:
  rotate_modes = ["none", "loseless", "any"]
  def __init__(self):
      pass

  @classmethod
  def INPUT_TYPES(self):
    return {
      "required": {
        "kps_data": ("KPS_DATA",),
        "rotate_mode": (self.rotate_modes,)
      },
    }

  RETURN_TYPES = ("FLOAT",)
  RETURN_NAMES = ("angle",)
  FUNCTION = "get_angle"
  CATEGORY = CATEGORY_NAME

  def get_angle(self, kps_data, rotate_mode):
    kps_data = json.loads(kps_data)
    angle = 0.
    if rotate_mode != "none" :
      angle = get_angle(
        kps_data['array'][0], kps_data['array'][1],
        round_angle = True if rotate_mode == "loseless" else False
      )
    return (angle,)


#==============================================================================
class ComposeRotated:
  def __init__(self):
    pass

  @classmethod
  def INPUT_TYPES(self):
    return {
      "required": {
        "original_image": ("IMAGE",),
        "rotated_image": ("IMAGE",),
      }
  }

  RETURN_TYPES = ("IMAGE",)
  RETURN_NAMES = ("image",)
  FUNCTION = "compose_rotate"
  CATEGORY = CATEGORY_NAME

  def compose_rotate(self, original_image, rotated_image):
    original_width, original_height = original_image.shape[2], original_image.shape[1]
    rotated_width, rotated_height = rotated_image.shape[2], rotated_image.shape[1]

    if rotated_width != original_width:
      pad_x1 = (rotated_width - original_width) // 2
      pad_x2 = pad_x1 * -1
    else:
      pad_x1 = 0
      pad_x2 = original_width

    if rotated_height != original_height:
      pad_y1 = (rotated_height - original_height) // 2
      pad_y2 = pad_y1 * -1
    else:
      pad_y1 = 0
      pad_y2 = original_height

    image = rotated_image[:, pad_y1:pad_y2, pad_x1:pad_x2, :]
    return (image,)


#==============================================================================
class RotateImage:
  @classmethod
  def INPUT_TYPES(self):
    return {
      "required": {
        "image": ("IMAGE",),
        "angle": ("FLOAT", {"default": 0.0, "min": -360.0, "step": 0.1, "max": 360.0},),
        "counter_clockwise": ("BOOLEAN", {"default": True},),
      }
    }

  RETURN_TYPES = ("IMAGE",)
  RETURN_NAMES = ("rotated_image",)  # must match RETURN_TYPES, which has a single IMAGE output
  FUNCTION = "rotate_and_pad_image"
  CATEGORY = CATEGORY_NAME

  def rotate_and_pad_image(self, image, angle, counter_clockwise):
    if angle == 0 or angle == 360:
      return (image,)

    image = image_rotate_with_pad(image, counter_clockwise, angle)

    return (image,)
  

#==============================================================================
class LoadInstantIdAdapter:
  def __init__(self):
    pass

  @classmethod
  def INPUT_TYPES(self):
    return {
      "required": {
        "ipadapter":  (folder_paths.get_filename_list("ipadapter"), { "tooltip": "The default folder where the adapter is searched for is: models/ipadapter." }),
      }
  }

  RETURN_TYPES = ("INSTANTID_ADAPTER", "RESAMPLER", )
  RETURN_NAMES = ("InstantId_adapter", "resampler",)
  FUNCTION = "load_instantId_adapter"
  CATEGORY = CATEGORY_NAME

  def load_instantId_adapter(self, ipadapter):
    ipadapter_path = folder_paths.get_full_path("ipadapter", ipadapter)
    model = comfy.utils.load_torch_file(ipadapter_path, safe_load=True)
    instantId = InstantId(model['ip_adapter'])

    resampler = Resampler(
      dim=1280,
      depth=4,
      dim_head=64,
      heads=20,
      num_queries=16,
      embedding_dim=512,
      output_dim=2048,
      ff_mult=4
    )
    resampler.load_state_dict(model["image_proj"])
    return (instantId, resampler)


#==============================================================================
class InstantIdAdapterApply:
  def __init__(self):
    pass

  @classmethod
  def INPUT_TYPES(self):
    return {
      "required": {
        "model": ("MODEL", ),
        "instantId_adapter": ("INSTANTID_ADAPTER", ),
        "face_conditioning": ("FACE_CONDITIONING", ),
        "strength": ("FLOAT", {"default": 0.8, "min": 0, "step": 0.1, "max": 10},),
      }
    }

  RETURN_TYPES = ("MODEL",)
  RETURN_NAMES = ("model",)
  FUNCTION = "apply_instantId_adapter"
  CATEGORY = CATEGORY_NAME

  def apply_instantId_adapter(self, model, instantId_adapter, face_conditioning, strength):
    if strength == 0: return (model,)

    instantId = instantId_adapter.to(comfy.model_management.get_torch_device())
    patch_kwargs = {
      "instantId": instantId,
      "scale": strength,
      "cond": face_conditioning,
      "number": 0
    }

    m = model.clone()

    for id in [4,5,7,8]:
      block_indices = range(2) if id in [4, 5] else range(10)
      for index in block_indices:
        set_model_patch_replace(m, patch_kwargs, ("input", id, index))
        patch_kwargs["number"] += 1
      block_indices = range(2) if id in [3, 4, 5] else range(10)
      for index in block_indices:
        set_model_patch_replace(m, patch_kwargs, ("output", id, index))
        patch_kwargs["number"] += 1
    for index in range(10):
      set_model_patch_replace(m, patch_kwargs, ("middle", 1, index))
      patch_kwargs["number"] += 1

    return (m,)


#==============================================================================
# based on ControlNetApplyAdvance from ComfyUi/nodes.py
class ControlNetInstantIdApply:
  @classmethod
  def INPUT_TYPES(self):
    return {
      "required": {
        "positive": ("CONDITIONING", ),
        "negative": ("CONDITIONING", ),
        "face_conditioning": ("FACE_CONDITIONING", ),
        "control_net": ("CONTROL_NET", ),
        "image": ("IMAGE", ),
        "strength": ("FLOAT", {"default": 1.0, "min": 0.0, "max": 10.0, "step": 0.01})
      }
    }

  RETURN_TYPES = ("CONDITIONING", "CONDITIONING", )
  RETURN_NAMES = ("positive", "negative",)
  FUNCTION = "apply_controlnet"
  CATEGORY = CATEGORY_NAME

  def apply_controlnet(self, positive, negative, face_conditioning, control_net, image, strength):
    if strength == 0:
        return (positive, negative)

    control_hint = image.movedim(-1,1)
    cnets = {}

    out = []
    for conditioning, isPositive in zip([positive, negative], [True, False]):
      c = []
      for t in conditioning:
        d = t[1].copy()

        prev_cnet = d.get("control", None)
        if prev_cnet in cnets:
          c_net = cnets[prev_cnet]
        else:
          c_net = control_net.copy().set_cond_hint(control_hint, strength)
          c_net.set_previous_controlnet(prev_cnet)
          cnets[prev_cnet] = c_net

        if isPositive:
          d["cross_attn_controlnet"] = face_conditioning.to(comfy.model_management.intermediate_device())
        else :
          d["cross_attn_controlnet"] = torch.zeros_like(face_conditioning).to(comfy.model_management.intermediate_device())
        d["control"] = c_net
        d["control_apply_to_uncond"] = False

        n = [t[0], d]
        c.append(n)
      out.append(c)
    return (out[0], out[1],)


#==============================================================================
class InstantIdAndControlnetApply:
  def __init__(self):
    pass

  @classmethod
  def INPUT_TYPES(self):
    return {
      "required": {
        "model": ("MODEL", ),
        "ipadapter_path":  (folder_paths.get_filename_list("ipadapter"), { "tooltip": "The default folder where the adapter is searched for is: models/ipadapter." }),
        "control_net_name": (folder_paths.get_filename_list("controlnet"), ),
        "face_embed": ("FACE_EMBED", ),
        "control_image": ("IMAGE", ),
        "adapter_strength": ("FLOAT", {"default": 0.5, "min": 0, "step": 0.1, "max": 10},),
        "control_net_strength": ("FLOAT", {"default": 0.7, "min": 0.0, "max": 10.0, "step": 0.01}),
        "positive": ("CONDITIONING", ),
        "negative": ("CONDITIONING", )
      }
    }

  RETURN_TYPES = ("MODEL", "CONDITIONING", "CONDITIONING",)
  RETURN_NAMES = ("model", "positive", "negative",)
  FUNCTION = "apply_instantId_adapter_and_controlnet"
  CATEGORY = CATEGORY_NAME

  def apply_instantId_adapter_and_controlnet(
      self, model, ipadapter_path, control_net_name, face_embed, control_image,
      adapter_strength, control_net_strength, positive, negative
  ):
    graph = GraphBuilder()
    loadInstantIdAdapter = graph.node(
      "LoadInstantIdAdapter", ipadapter=ipadapter_path
    )
    faceEmbedCombine = graph.node(
      "FaceEmbedCombine", resampler=loadInstantIdAdapter.out(1), face_embeds=face_embed
     )
    loadControlNet = graph.node(
      "ControlNetLoader", control_net_name = control_net_name
    )
    instantIdApply = graph.node(
      "InstantIdAdapterApply", model=model, instantId_adapter=loadInstantIdAdapter.out(0),
      face_conditioning=faceEmbedCombine.out(0), strength=adapter_strength
    )
    controlNetInstantIdApply = graph.node(
      "ControlNetInstantIdApply", positive=positive, negative=negative,
      face_conditioning=faceEmbedCombine.out(0), control_net=loadControlNet.out(0),
      image=control_image, strength=control_net_strength
    )

    return {
      "result": (instantIdApply.out(0), controlNetInstantIdApply.out(0), controlNetInstantIdApply.out(1),),
      "expand":graph.finalize()
    }


#==============================================================================
class PreprocessImageAdvanced:
  resize_modes = ["auto", "free", "scale by width", "scale by height"]
  upscale_methods = ["nearest-exact", "bilinear", "area", "bicubic", "lanczos"]

  def __init__(self):
      pass

  @classmethod
  def INPUT_TYPES(self):
    return {
      "required": {
        "image": ("IMAGE", { "tooltip": "Pose image." }),
        "mask": ("MASK",),
        "width": ("INT", {"default": 1024, "min": 0, "step": 1, "max": MAX_RESOLUTION}),
        "height": ("INT", {"default": 1024, "min": 0, "step": 1, "max": MAX_RESOLUTION}),
        "resize_mode": (self.resize_modes,),
        "upscale_method": (self.upscale_methods,),
        "pad_top": ("INT", {"default": 100, "min": 0, "step": 1, "max": MAX_RESOLUTION}),
        "pad_right": ("INT", {"default": 100, "min": 0, "step": 1, "max": MAX_RESOLUTION}),
        "pad_bottom": ("INT", {"default": 100, "min": 0, "step": 1, "max": MAX_RESOLUTION}),
        "pad_left": ("INT", {"default": 100, "min": 0, "step": 1, "max": MAX_RESOLUTION}),
      },
      "optional": {
        "insightface": ("INSIGHTFACE_APP",),
      }
    }

  RETURN_TYPES = ("IMAGE", "MASK", "IMAGE", "INT", "INT", "INT", "INT", "INT", "INT",)
  RETURN_NAMES = ("resized_image", "mask", "control_image", "x", "y", "original_width", "original_height", "new_width", "new_height",)
  FUNCTION = "preprocess_image"
  CATEGORY = CATEGORY_NAME

  def preprocess_image(
        self, image, mask, width, height, resize_mode, upscale_method,
        pad_top, pad_right, pad_bottom, pad_left, insightface = None
    ):

    p_x1, p_y1, p_x2, p_y2 = get_mask_bbox_with_padding(mask.squeeze(0), pad_top, pad_right, pad_bottom, pad_left)
    mask = mask[:, p_y1:p_y2, p_x1:p_x2]
    image = image[:, p_y1:p_y2, p_x1:p_x2]
    kps = get_kps_from_image(image, insightface) if insightface else None
    _, original_height, original_width, _ = image.shape

    if resize_mode == "auto":
       width, height = resize_to_fit_area(int(p_x2 - p_x1), int(p_y2 - p_y1), width, height)
    else:
      if resize_mode != "free":
        ratio = original_width / original_height
        if resize_mode == "scale by width":
          height = int(width / ratio)
        if resize_mode == "scale by height":
          width = int(height * ratio)

    width = (width // 8) * 8
    height = (height // 8) * 8

    mask = mask.reshape((-1, 1, mask.shape[-2], mask.shape[-1])).movedim(1, -1).expand(-1, -1, -1, 3)
    image = image.movedim(-1,1)
    mask = mask.movedim(-1,1)

    mask = comfy.utils.common_upscale(mask, width, height, "bilinear", "disabled")
    image = comfy.utils.common_upscale(image, width, height, upscale_method, "disabled")

    mask = mask.movedim(1,-1)
    mask = mask[:, :, :, 0]
    image = image.movedim(1,-1)
    _, new_height, new_width = mask.shape

    if kps is not None:
      kps *= [image.shape[2]  / original_width, image.shape[1] / original_height]
      control_image = draw_kps(width, height, kps)
      control_image = (torch.from_numpy(control_image).float() / 255.0).unsqueeze(0)

    return (
      image, mask,
      control_image if kps is not None else None,
      p_x1, p_y1, original_width, original_height,
      new_width, new_height,
    )


#==============================================================================
class PreprocessImage(PreprocessImageAdvanced):
  def __init__(self):
    pass

  @classmethod
  def INPUT_TYPES(self):
    return {
      "required": {
        "image": ("IMAGE",),
        "mask": ("MASK",),
        "width": ("INT", {"default": 1024, "min": 0, "step": 1, "max": MAX_RESOLUTION}),
        "height": ("INT", {"default": 1024, "min": 0, "step": 1, "max": MAX_RESOLUTION}),
        "resize_mode": (self.resize_modes,),
        "pad": ("INT", {"default": 100, "min": 0, "step": 1, "max": MAX_RESOLUTION}),
      },
      "optional": {
        "insightface": ("INSIGHTFACE_APP",),
      }
    }

  FUNCTION = "preprocess_image_simple"
  CATEGORY = CATEGORY_NAME

  def preprocess_image_simple(self, image, mask, width, height, resize_mode, pad, insightface = None):
    return self.preprocess_image(
       image, mask, width, height, resize_mode, "bilinear", pad, pad, pad, pad, insightface
    )


#==============================================================================
class LoadInsightface:
  def __init__(self):
    pass

  @classmethod
  def INPUT_TYPES(self):
    return {}

  RETURN_TYPES = ("INSIGHTFACE_APP",)
  RETURN_NAMES = ("insightface",)
  FUNCTION = "load_insightface"
  CATEGORY = CATEGORY_NAME

  def load_insightface(self):
    app = FaceAnalysis(
      name="antelopev2",
      root=INSIGHTFACE_PATH,
      providers=["CPUExecutionProvider", "CUDAExecutionProvider"]
    )
    app.prepare(ctx_id=0, det_size=(640, 640))
    return (app,)


#==============================================================================
class KpsDraw:
  @classmethod
  def INPUT_TYPES(self):
    return {
      "required": {
        "width": ("INT", {"default": 1024, "min": 0, "step": 1, "max": MAX_RESOLUTION}),
        "height": ("INT", {"default": 1024, "min": 0, "step": 1, "max": MAX_RESOLUTION}),
        "kps": ("HIDDEN_STRING_JSON", ),
      },
      "optional": {
        "image_reference": ("IMAGE",),
      }
    }

  RETURN_TYPES = ("KPS_DATA",)
  RETURN_NAMES = ("kps_data",)
  FUNCTION = "draw_kps"
  CATEGORY = CATEGORY_NAME

  def draw_kps(self, width, height, kps, image_reference = None):
    return (kps,)
  
 
#==============================================================================
class Kps3dFromImage:
  @classmethod
  def INPUT_TYPES(self):
    return {
      "required": {
        "width": ("INT", {"default": 1024, "min": 0, "step": 1, "max": MAX_RESOLUTION}),
        "height": ("INT", {"default": 1024, "min": 0, "step": 1, "max": MAX_RESOLUTION}),
        "kps": ("HIDDEN_STRING_JSON", ),
      },
       "optional": {
        "image": ("IMAGE",),
      }
    }

  RETURN_TYPES = ("KPS_DATA_3D", "KPS_DATA",)
  RETURN_NAMES = ("kps_data_3d", "kps_data")
  FUNCTION = "make_kps"
  CATEGORY = CATEGORY_NAME

  def make_kps(self, width, height, kps, image = None):

    kps_2d = kps3d_to_kps2d(json.loads(kps))

    return (kps, json.dumps(kps_2d),)


#==============================================================================
class KpsMaker:
  @classmethod
  def INPUT_TYPES(self):
    return {
      "required": {
        "kps_data": ("KPS_DATA",),
      }
    }

  RETURN_TYPES = ("IMAGE",)
  RETURN_NAMES = ("control_image",)
  FUNCTION = "draw_kps"
  CATEGORY = CATEGORY_NAME

  def draw_kps(self, kps_data):
    kps_data = json.loads(kps_data)

    control_image = draw_kps(kps_data['width'], kps_data["height"], kps_data["array"], alphas = kps_data["opacities"])
    control_image = (torch.from_numpy(control_image).float() / 255.0).unsqueeze(0)

    return (control_image, )
  

#==============================================================================
class Kps2dRandomizer:
  @classmethod
  def INPUT_TYPES(self):
    return {
      "required": {
        "kps_data": ("KPS_DATA",),
        "seed": ("INT", {"default": 0, "min": 0, "max": 0xffffffffffffffff, "tooltip": "The random seed used for randomizing KPS"}),
        "angle_min": ("INT", {"default": 0, "min": -180, "step": 1, "max": 180}),
        "angle_max": ("INT", {"default": 0, "min": -180, "step": 1, "max": 180}),
        "scale_min": ("FLOAT", {"default": 1, "min": 0.1, "step": 0.01, "max": 5}),
        "scale_max": ("FLOAT", {"default": 1, "min": 0.1, "step": 0.01, "max": 5}),
        "translate_x": ("INT", {"default": 0, "min": 0, "step": 1, "max": MAX_RESOLUTION}),
        "translate_y": ("INT", {"default": 0, "min": 0, "step": 1, "max": MAX_RESOLUTION}),
        "border": ("INT", {"default": 0, "min": 0, "step": 1, "max": MAX_RESOLUTION}),
      }
    }

  RETURN_TYPES = ("KPS_DATA",)
  RETURN_NAMES = ("kps_data",)
  FUNCTION = "rand_kps"
  CATEGORY = CATEGORY_NAME

  def rand_kps(self, kps_data, seed, angle_min, angle_max, scale_min, scale_max, translate_x, translate_y, border):

    torch.manual_seed(seed)
    kps_data = json.loads(kps_data)

    angle = 0
    scale = 1
    width = kps_data['width']
    height = kps_data['height']

    # get random angle
    if angle_min != 0 or angle_max != 0:
      angle = torch.randint(angle_min, angle_max + 1, (1,)).item()

    # get random scale
    if scale_min != 1 or scale_max != 1:
      scale = (scale_max - scale_min) * torch.rand(1).item() + scale_min

    # get random translate_x and translate_y
    random_translate_x = 0
    if translate_x != 0:
      random_translate_x = torch.randint(-translate_x, translate_x + 1, (1,)).item()

    random_translate_y = 0
    if translate_y != 0:
      random_translate_y = torch.randint(-translate_y, translate_y + 1, (1,)).item()

    # rotate
    if angle != 0:
      centroid = np.mean(np.array(kps_data["array"]), axis=0)
      angle_rad = np.radians(angle)

      rotated_points = []
      for x, y in kps_data["array"]:
        translated_x = x - centroid[0]
        translated_y = y - centroid[1]
        
        rotated_x = translated_x * np.cos(angle_rad) - translated_y * np.sin(angle_rad)
        rotated_y = translated_x * np.sin(angle_rad) + translated_y * np.cos(angle_rad)

        rotated_points.append([rotated_x + centroid[0], rotated_y + centroid[1]])

      kps_data["array"] = rotated_points
  
    # translate
    if random_translate_x != 0 or random_translate_y != 0:
      translated_points = []
      for x, y in kps_data["array"]:
        translated_points.append([x + random_translate_x, y + random_translate_y])
      kps_data["array"] = translated_points

    # scale
    if scale != 1:
      centroid = np.mean(np.array(kps_data["array"]), axis=0)
      scaled_points = []
      for x, y in kps_data["array"]:
        scaled_points.append([
          centroid[0] + (x - centroid[0]) * scale,
          centroid[1] + (y - centroid[1]) * scale
        ])
      kps_data["array"] = scaled_points

    # check border
    x_values = [x for x, _ in kps_data["array"]]
    y_values = [y for _, y in kps_data["array"]]

    min_x, max_x = min(x_values), max(x_values)
    min_y, max_y = min(y_values), max(y_values)

    shift_x = 0
    shift_y = 0

    if min_x < border:
      shift_x = border - min_x
    elif max_x > width - border:
      shift_x = (width - border) - max_x

    if min_y < border:
      shift_y = border - min_y
    elif max_y > height - border:
      shift_y = (height - border) - max_y

    final_output = []
    for x, y in kps_data["array"]:
      final_output.append([int(x + shift_x), int(y + shift_y)])

    kps_data["array"] = final_output

    return (json.dumps(kps_data), )

#==============================================================================
class Kps3dRandomizer:
  @classmethod
  def INPUT_TYPES(self):
    return {
      "required": {
        "kps_data_3d": ("KPS_DATA_3D",),
        "seed": ("INT", {"default": 0, "min": 0, "max": 0xffffffffffffffff, "tooltip": "The random seed used for randomizing KPS"}),
        "rotate_x": ("INT", {"default": 0, "min": -180, "step": 1, "max": 180}),
        "rotate_y": ("INT", {"default": 0, "min": -180, "step": 1, "max": 180}),
        "rotate_z": ("INT", {"default": 0, "min": -180, "step": 1, "max": 180})
      }
    }

  RETURN_TYPES = ("KPS_DATA",)
  RETURN_NAMES = ("kps_data",)
  FUNCTION = "rand_kps"
  CATEGORY = CATEGORY_NAME

  def rand_kps(self, kps_data_3d, seed, rotate_x, rotate_y, rotate_z):
    torch.manual_seed(seed)
    kps_data = json.loads(kps_data_3d)

    angle_x = 0
    if rotate_x != 0:
      angle_x = torch.randint(-rotate_x, rotate_x + 1, (1,)).item()
    angle_y = 0
    if rotate_y != 0:
      angle_y = torch.randint(-rotate_y, rotate_y + 1, (1,)).item()
    angle_z = 0
    if rotate_z != 0:
      angle_z = torch.randint(-rotate_z, rotate_z + 1, (1,)).item()

    angle_x += kps_data['rotateX']
    angle_y += kps_data['rotateY']
    angle_z += kps_data['rotateZ']
    if angle_x != 0 or angle_y != 0 or angle_z != 0: 
      points = kps_rotate_3d(kps_data['array'], angle_x, angle_y, angle_z)
    else:
      points = kps_data['array']

    kps_data['array'] = points
    kps_data = kps3d_to_kps2d(kps_data)
  
    return (json.dumps(kps_data), )


#==============================================================================
class Kps2dScaleBy:
  @classmethod
  def INPUT_TYPES(self):
    return {
      "required": {
        "kps_data": ("KPS_DATA",),
        "scale": ("FLOAT", {"default": 1, "min": 0, "max": 100}),
      }
    }

  RETURN_TYPES = ("KPS_DATA",)
  RETURN_NAMES = ("kps_data",)
  FUNCTION = "scale_kps_by"
  CATEGORY = CATEGORY_NAME

  def scale_kps_by(self, kps_data, scale):
    kps_data = json.loads(kps_data)

    points = kps_data['array']
    kps_data['width'] = int(kps_data['width'] * scale)
    kps_data['height'] = int(kps_data['height'] * scale)
    for i, point in enumerate(points):
      kps_data['array'][i][0] = int(point[0] * scale)
      kps_data['array'][i][1] = int(point[1] * scale)
  
    return (json.dumps(kps_data), )


#==============================================================================
class Kps2dScale:
  @classmethod
  def INPUT_TYPES(self):
    return {
      "required": {
        "kps_data": ("KPS_DATA",),
        "width": ("INT", {"default": 1024, "min": 0, "max": MAX_RESOLUTION}),
        "height": ("INT", {"default": 1024, "min": 0, "max": MAX_RESOLUTION}),
      }
    }

  RETURN_TYPES = ("KPS_DATA",)
  RETURN_NAMES = ("kps_data",)
  FUNCTION = "scale_kps"
  CATEGORY = CATEGORY_NAME

  def scale_kps(self, kps_data, width, height):
    kps_data = json.loads(kps_data)

    points = kps_data['array']
    scaleX = width / kps_data['width']
    scaleY = height / kps_data['height']
    kps_data['width'] = width
    kps_data['height'] = height

    for i, point in enumerate(points):
      kps_data['array'][i][0] = int(point[0] * scaleX)
      kps_data['array'][i][1] = int(point[1] * scaleY)
  
    return (json.dumps(kps_data), )
  

#==============================================================================
class Kps2dRotate:
  @classmethod
  def INPUT_TYPES(self):
    return {
      "required": {
        "kps_data": ("KPS_DATA",),
        "angle": ("FLOAT", {"default": 0.0, "min": -360.0, "step": 0.1, "max": 360.0},),
        "counter_clockwise": ("BOOLEAN", {"default": True},),
      }
    }

  RETURN_TYPES = ("KPS_DATA",)
  RETURN_NAMES = ("kps_data",)
  FUNCTION = "rotate_kps"
  CATEGORY = CATEGORY_NAME

  def rotate_kps(self, kps_data, angle, counter_clockwise):
    if angle % 360 == 0:
      return (kps_data,)

    if counter_clockwise: angle = -angle

    kps_data = json.loads(kps_data)

    points = kps_data['array']
    new_width, new_height = calculate_size_after_rotation(kps_data['width'], kps_data['height'], angle)

    kps_data['array'] = kps_rotate_2d(points, kps_data['width'], kps_data['height'], int(new_width), int(new_height), angle)
    kps_data['width'] = int(new_width)
    kps_data['height'] = int(new_height)

    return (json.dumps(kps_data), )
  
#==============================================================================
class Kps2dCrop:
  @classmethod
  def INPUT_TYPES(self):
    return {
      "required": {
        "kps_data": ("KPS_DATA",),
        "x": ("INT", {"default": 0, "min": 0, "max": MAX_RESOLUTION, "step": 1}),
        "y": ("INT", {"default": 0, "min": 0, "max": MAX_RESOLUTION, "step": 1}),
        "width": ("INT", {"default": 1024, "min": 1, "max": MAX_RESOLUTION, "step": 1}),
        "height": ("INT", {"default": 1024, "min": 1, "max": MAX_RESOLUTION, "step": 1}),
      }
    }

  RETURN_TYPES = ("KPS_DATA",)
  RETURN_NAMES = ("kps_data",)
  FUNCTION = "crop_kps"
  CATEGORY = CATEGORY_NAME

  def crop_kps(self, kps_data, x, y, width, height):
    kps_data = json.loads(kps_data)

    kps_data['width'] = width
    kps_data['height'] = height

    points = kps_data['array']

    for i, point in enumerate(points):
      kps_data['array'][i][0] = point[0] - x
      kps_data['array'][i][1] = point[1] - y

    return (json.dumps(kps_data), )
  

#==============================================================================
class MaskFromKps:
  @classmethod
  def INPUT_TYPES(self):
    return {
      "required": {
        "kps_data": ("KPS_DATA",),
        "grow_by": ("INT", {"default": 4, "min": 1, "max": 10, "step": 1}),
      }
    }

  RETURN_TYPES = ("MASK",)
  RETURN_NAMES = ("mask",)
  FUNCTION = "create_mask"
  CATEGORY = CATEGORY_NAME

  def create_mask(self, kps_data, grow_by):
    kps_data = json.loads(kps_data)
    bbox = get_bbox_from_kps(kps_data, grow_by)
    mask = torch.zeros((kps_data['height'], kps_data['width']))
    mask[bbox[0][1]:bbox[1][1], bbox[0][0]:bbox[1][0]] = 1

    return (mask.unsqueeze(0), )

NODE_CLASS_MAPPINGS = {
  "LoadInsightface": LoadInsightface,
  "LoadInstantIdAdapter": LoadInstantIdAdapter,
  "InstantIdAdapterApply": InstantIdAdapterApply,
  "ControlNetInstantIdApply": ControlNetInstantIdApply,
  "InstantIdAndControlnetApply": InstantIdAndControlnetApply,
  "PreprocessImage": PreprocessImage,
  "PreprocessImageAdvanced": PreprocessImageAdvanced,
  "AngleFromFace": AngleFromFace,
  "AngleFromKps": AngleFromKps,
  "RotateImage": RotateImage,
  "ComposeRotated": ComposeRotated,
  "KpsDraw": KpsDraw,
  "Kps3dFromImage": Kps3dFromImage,
  "KpsMaker": KpsMaker,
  "Kps2dRandomizer": Kps2dRandomizer,
  "Kps3dRandomizer": Kps3dRandomizer,
  "KpsScale": Kps2dScale,
  "KpsScaleBy": Kps2dScaleBy,
  "KpsRotate": Kps2dRotate,
  "KpsCrop": Kps2dCrop,
  "MaskFromKps": MaskFromKps,
  "FaceEmbed": FaceEmbed,
  "FaceEmbedCombine": FaceEmbedCombine
}

NODE_DISPLAY_NAME_MAPPINGS = {
  "LoadInsightface": "Load insightface",
  "LoadInstantIdAdapter": "Load instantId adapter",
  "InstantIdAdapterApply": "Apply instantId adapter",
  "ControlNetInstantIdApply": "Apply instantId ControlNet",
  "InstantIdAndControlnetApply": "Apply instantId and ControlNet",
  "PreprocessImage": "Preprocess image for instantId",
  "PreprocessImageAdvanced": "Preprocess image for instantId (Advanced)",
  "AngleFromFace": "Get Angle from face",
  "AngleFromKps": "Get Angle from KPS data",
  "RotateImage": "Rotate Image",
  "ComposeRotated": "Remove rotation padding",
  "KpsDraw": "Draw KPS",
  "Kps3dFromImage": "3d KPS from image",
  "KpsMaker": "Create KPS Image",
  "Kps2dRandomizer": "Randomize 2d KPS",
  "Kps3dRandomizer": "Randomize 3d KPS",
  "KpsScaleBy": "Scale 2d KPS by",
  "KpsScale": "Scale 2d KPS",
  "KpsRotate": "Rotate 2d KPS",
  "KpsCrop": "Crop 2d KPS",
  "MaskFromKps": "Create mask from Kps",
  "FaceEmbed": "FaceEmbed for instantId",
  "FaceEmbedCombine": "FaceEmbed Combine"
}

================================================
FILE: pyproject.toml
================================================
[project]
name = "comfyui-instantid-faceswap"
description = "Implementation of [a/faceswap](https://github.com/nosiu/InstantID-faceswap/tree/main) based on [a/InstantID](https://github.com/InstantID/InstantID) for ComfyUI."
version = "0.1.1"
license = { file = "LICENSE" }
dependencies = ["insightface", "onnxruntime-gpu"]

[project.urls]
Repository = "https://github.com/nosiu/comfyui-instantId-faceswap"

[tool.comfy]
PublisherId = "nosiu"
DisplayName = "comfyui-instantId-faceswap"
Icon = ""

================================================
FILE: requirements.txt
================================================
insightface
onnxruntime-gpu

================================================
FILE: ui/dialogs.js
================================================
import { createShader, vertexShaderSrc, fragmentShaderSrc } from "./shaders.js"
import { getPointsCenter, drawKps, checkWebGlSupport, rotatePoints3D } from "./helpers.js"
import { createSlider, createButton, createRadiobox } from "./uiHelpers.js"

class KPSDialogBase {
    constructor(w, h, img = undefined) {
        this.isDragging = false
        this.draggedPointIndex = null
        this.mousedown_x = undefined
        this.mousedown_y = undefined
        this.mousedown_pan_x = undefined
        this.mousedown_pan_y = undefined
        this.pan_x = 0
        this.pan_y = 0
        this.cursorX = undefined
        this.cursorY = undefined
        this.zoom_ratio = 1
        this.min_zoom = undefined
        this.showOpacities = false
        if (img) {
          this.canvasWidth = img.width
          this.canvasHeight = img.height
        } else {
          this.canvasWidth = w
          this.canvasHeight = h
        }

        // ---------------------------------
        this.element = document.createElement("div")
        this.element.style.display = "none"
        this.element.style.width = "80vw"
        this.element.style.height = "80vh"
        this.element.style.zIndex = 8888
        this.element.classList.add('comfy-modal')
        this.element.classList.add('kps-sandbox')

        document.body.appendChild(this.element)

        this.canvas = document.createElement("canvas")
        this.canvas.style.position = "absolute"
        this.canvas.style.pointerEvents = "auto"
        this.canvas.style.zIndex = "-1"
        this.element.appendChild(this.canvas)

        this.canvas.width = this.canvasWidth
        this.canvas.height = this.canvasHeight
    }

    initializeCanvasPanZoom () {
      let drawWidth = this.canvasWidth;
      let drawHeight = this.canvasHeight;
      let width = this.element.clientWidth;
      let height = this.element.clientHeight;
  
      if (this.canvasWidth > width) {
        drawWidth = width;
        drawHeight = drawWidth / this.canvasWidth * this.canvasHeight;
      }
      if (drawHeight > height) {
        drawHeight = height;
        drawWidth = drawHeight / this.canvasHeight * this.canvasWidth;
      }
      this.zoom_ratio = drawWidth / this.canvasWidth;
      this.min_zoom = drawWidth / this.canvasWidth
  
      const canvasX = (width - drawWidth) / 2;
      const canvasY = (height - drawHeight) / 2;
      this.pan_x = canvasX;
      this.pan_y = canvasY;
      this.invalidatePanZoom();
    }
  
    invalidatePanZoom () {
      let raw_width = this.canvasWidth * this.zoom_ratio;
      let raw_height = this.canvasHeight * this.zoom_ratio;
      if (this.pan_x + raw_width < 10) {
        this.pan_x = 10 - raw_width;
      }
      if (this.pan_y + raw_height < 10) {
        this.pan_y = 10 - raw_height;
      }
  
      this.canvas.style.width = `${raw_width}px`;
      this.canvas.style.height = `${raw_height}px`;
      this.canvas.style.left = `${this.pan_x}px`;
      this.canvas.style.top = `${this.pan_y}px`;
  
      if (this.hasImage) {
        this.imageCanvas.style.width = `${raw_width}px`;
        this.imageCanvas.style.height = `${raw_height}px`;
        this.imageCanvas.style.left = `${this.pan_x}px`;
        this.imageCanvas.style.top = `${this.pan_y}px`;
      }
    }

    setBasicControls () {
      const buttonBar = document.createElement("div")
      buttonBar.id = "instantIdButtonBar"
      buttonBar.style.position = "absolute"
      buttonBar.style.bottom = "0"
      buttonBar.style.height = "50px"
      buttonBar.style.left = "20px"
      buttonBar.style.right = "20px"
      buttonBar.style.pointerEvents = "none"
      buttonBar.appendChild(createButton("Save", true, this.save.bind(this)))
      buttonBar.appendChild(createButton("Cancel", true, this.closeModal.bind(this)))
      buttonBar.appendChild(createButton("Reset pan & zoom", false, () => {
        this.initializeCanvasPanZoom();
      }));
      buttonBar.appendChild(createButton("Reset KPS", false, () => {
        this.kps = this.getDefaultKps();
        this.draw();
      }));

      buttonBar.appendChild(this.createZoomSlider())
  
      this.element.appendChild(buttonBar);
      // ----------------------
      const opacitiesButton = createButton(
        "Opacity options", false, () => {
          this.showOpacities = !this.showOpacities
          if (this.showOpacities) {
            opacitiesButton.innerText = "Hide options"
            const advancedDiv = document.createElement("div")
            advancedDiv.style.overflow = "auto"
            advancedDiv.id = "kpsDialog0"
            advancedDiv.style.padding = "20px"
            advancedDiv.style.paddingTop = "50px"
            advancedDiv.style.width = "200px"
            advancedDiv.style.height = "100%"
            advancedDiv.style.position = "absolute"
            advancedDiv.style.display = "flex"
            advancedDiv.style.left = "0"
            advancedDiv.style.top = "0"
            advancedDiv.style.backgroundColor = "black"
            advancedDiv.style.color = "white"
  
            const radioBar = document.createElement("div");
            radioBar.style.marginTop = "20px"
            radioBar.style.pointerEvents = "auto"
  
            radioBar.appendChild(createRadiobox("red", "red opacity", this.kpsOpacities, 0, this.draw.bind(this)))
            radioBar.appendChild(createRadiobox("green", "green opacity", this.kpsOpacities, 1, this.draw.bind(this)))
            radioBar.appendChild(createRadiobox("blue", "blue opacity", this.kpsOpacities, 2, this.draw.bind(this)))
            radioBar.appendChild(createRadiobox("yellow", "yellow opacity", this.kpsOpacities, 3, this.draw.bind(this)))
            radioBar.appendChild(createRadiobox("purple", "purple opacity", this.kpsOpacities, 4, this.draw.bind(this)))

            advancedDiv.appendChild(radioBar);
            this.element.appendChild(advancedDiv)
          } else {
            opacitiesButton.innerText = "Opacity options"
            const el = document.querySelector("#kpsDialog0")
            if (el) el.remove()
          }
        }
      )
      opacitiesButton.style.zIndex = 8889
      opacitiesButton.style.position = "absolute"
      this.element.appendChild(opacitiesButton);
    }

    createZoomSlider () {
      const el = createSlider("Zoom", "instantIdZoomSlider", this.min_zoom, "2", "0.1", this.zoom_ratio, (event) => {
        this.zoom_ratio = parseFloat(event.target.value)
        this.invalidatePanZoom()
      })
      return el
    }

    drawMoveAll () {
      const pad = 20
      const w = 60
      const ctx = this.canvas.getContext('2d')
      let x = this.kps.reduce((a, b) => a[0] > b[0] ? a : b)[0] + pad
      let y = this.kps.reduce((a, b) => a[1] > b[1] ? a : b)[1] + pad
  
      ctx.beginPath();
      ctx.fillStyle = "rgb(8, 105, 216)";
  
      ctx.beginPath();
      ctx.rect(x, y, w, w);
      ctx.fill();
  
      ctx.font = '40px Arial';
      ctx.fillStyle = 'white';
      ctx.textAlign = 'center';
      ctx.fillText('M', x + 30, y + 40);
    }
}

export class KPSDialog2d extends KPSDialogBase{
  constructor(w, h, referenceImage, angleWidget, kpsJsonWidget) {
    super(w, h, referenceImage)

    this.hasImage = Boolean(referenceImage)
    if (this.hasImage) {
      this.opacity = "0.6"
    } else {
      this.opacity = "1"
    }

    this.kpsOpacities = kpsJsonWidget.value.opacities.length ? JSON.parse(JSON.stringify(kpsJsonWidget.value.opacities)) : [1, 1, 1, 1, 1]
    this.kps = kpsJsonWidget.value.array.length ? JSON.parse(JSON.stringify(kpsJsonWidget.value.array)) : this.getDefaultKps()

    this.angleWidget = angleWidget
    this.kpsJsonWidget = kpsJsonWidget
    
    if (this.hasImage) {
      this.imageCanvas = document.createElement("canvas")
      this.imageCanvas.style.position = "absolute"
      this.imageCanvas.style.zIndex = "-2"
      this.imageCanvas.style.pointerEvents = "none"
      this.element.appendChild(this.imageCanvas)
      this.imageCanvas.width = this.canvasWidth
      this.imageCanvas.height = this.canvasHeight
    }

    this.setBasicControls()
    this.setControls()
    this.attachListeners()
    this.canvas.style.opacity = this.opacity
    this.element.style.display = "block"
    this.initializeCanvasPanZoom()

    this.draw()
    if (this.hasImage) this.drawImage(referenceImage)
  }

  getDefaultKps () {
    const halfWidth = this.canvasWidth / 2;
    const halfHeight = this.canvasHeight / 2;
    return [
      [halfWidth - halfWidth / 2, halfHeight - halfHeight / 2],
      [halfWidth + halfWidth / 2, halfHeight - halfHeight / 2],
      [halfWidth, halfHeight],
      [halfWidth - halfWidth / 2, halfHeight + halfHeight / 2],
      [halfWidth + halfWidth / 2, halfHeight + halfHeight / 2],
    ]
  }

  setControls () {
    const buttonBar = document.querySelector("#instantIdButtonBar")
    if (this.hasImage) {
      buttonBar.appendChild(this.createOpacitySlider())
    }
  }

  createOpacitySlider () {
    const el = createSlider("Opacity", "instantIdOpacitySlider", "0.1", "1", "0.1", this.opacity, (event) => {
      this.opacity = event.target.value
      this.canvas.style.opacity = event.target.value
    })
    return el
  }

  attachListeners () {
    this.canvas.addEventListener('mousedown', this.mouseDown.bind(this))
    this.canvas.addEventListener('mousemove', this.mouseMove.bind(this))
    this.canvas.addEventListener('mouseup', this.mouseUp.bind(this))
    this.element.addEventListener('wheel', this.wheel.bind(this))
    this.element.addEventListener('DOMMouseScroll', (e) => e.preventDefault()) // thanks firefox.
    this.element.addEventListener('keydown', (event) => {
      if (event.key === "Escape") {
        this.closeModal()
      } else if (event.key === "Enter") {
        this.save()
      }
    })
  }

  closeModal () {
    document.body.removeChild(this.element)
  }

  async save () {

    const minX = Math.min(...this.kps.map(e => e[0]))
    const maxX = Math.max(...this.kps.map(e => e[0]))

    const minY = Math.min(...this.kps.map(e => e[1]))
    const maxY = Math.max(...this.kps.map(e => e[1]))

    this.kpsJsonWidget.value = {
      array: this.kps,
      opacities: this.kpsOpacities,
      width: this.canvasWidth,
      height: this.canvasHeight,
      bbox: [
        [
          Math.max(Math.ceil(minX - ((maxX - minX) /3)), 0),
          Math.max(Math.ceil(minY - ((maxY - minY) /3)), 0)
        ],
        [
          Math.min(Math.ceil(maxX + ((maxX - minX) /3)), this.canvasWidth),
          Math.min(Math.ceil(maxY + ((maxY - minY) /3)), this.canvasHeight)
        ],       
      ]
    }

    this.kpsJsonWidget.callback()
    const a = this.kps[0]
    const b = this.kps[1]
    let angle = Math.atan2(b[1] - a[1], b[0] - a[0]) * 180 / Math.PI

    this.angleWidget.value = angle
    this.closeModal()
  }

  changePointsPosition(closer = false, step = 10) {
    step /= this.zoom_ratio;
    const center = this.kps[2]

    const points = this.kps

    const magnitudes = points.map((point, index) => {
        if (index === 2) return Infinity; // Skip the center point (no magnitude calculation)
        const direction = [center[0] - point[0], center[1] - point[1]]
        return Math.sqrt(direction[0] * direction[0] + direction[1] * direction[1])
    });

    const allGreaterThan10 = magnitudes.every(mag => mag > 10)

    return points.map((point, index) => {
        if (index === 2) return point
        const direction = [center[0] - point[0], center[1] - point[1]]
        const magnitude = magnitudes[index]

        const scaleFactor = closer ? (magnitude - step) / magnitude : (magnitude + step) / magnitude;
        const scaledMoveVector = [direction[0] * scaleFactor, direction[1] * scaleFactor];

        if (closer) {
            if (allGreaterThan10) {
                return [
                    center[0] - scaledMoveVector[0],
                    center[1] - scaledMoveVector[1]
                ];
            }
            return point
        } else {
            return [
                center[0] - scaledMoveVector[0],
                center[1] - scaledMoveVector[1]
            ];
        }
    });
  }


  mouseDown (event) {
    event.preventDefault()
    const { offsetX: mouseX, offsetY: mouseY } = event

    if (event.ctrlKey) {
      if (event.buttons == 1) {
        this.mousedown_x = event.clientX
        this.mousedown_y = event.clientY
        this.mousedown_pan_x = this.pan_x
        this.mousedown_pan_y = this.pan_y
      }
      return
    }

    let maxX = -Infinity
    let maxY = -Infinity

    this.kps.forEach((kp, idx) => {
      let [x, y] = kp
      x *= this.zoom_ratio
      y *= this.zoom_ratio
      maxX = x > maxX ? x : maxX
      maxY = y > maxY ? y : maxY
      const distance = Math.sqrt((x - mouseX) ** 2 + (y - mouseY) ** 2)
      if (distance < 20 * this.zoom_ratio) {
        this.isDragging = true
        this.draggedPointIndex = idx
      }
    })

    maxX += 20 * this.zoom_ratio
    maxY += 20 * this.zoom_ratio
    if (mouseX >= maxX && mouseY >= maxY && mouseX < maxX + 60 * this.zoom_ratio && mouseY < maxY + 60 * this.zoom_ratio) {
      this.mousedown_x = event.clientX
      this.mousedown_y = event.clientY
      this.isDragging = true
      this.draggedPointIndex = -1
    }
  }

  mouseMove (event) {
    event.preventDefault();
    const { offsetX: mouseX, offsetY: mouseY } = event
    this.cursorX = event.pageX
    this.cursorY = event.pageY
    if (event.ctrlKey) {
      if (event.buttons == 1) {
        if (this.mousedown_x) {
          let deltaX = this.mousedown_x - event.clientX
          let deltaY = this.mousedown_y - event.clientY
          this.pan_x = this.mousedown_pan_x - deltaX
          this.pan_y = this.mousedown_pan_y - deltaY
          this.invalidatePanZoom()
        }
      }
    }
    if (this.isDragging) {
      const transformedX = (mouseX) / this.zoom_ratio
      const transformedY = (mouseY) / this.zoom_ratio
      if(this.draggedPointIndex !== null && this.draggedPointIndex > -1) {
          this.kps[this.draggedPointIndex] = [transformedX, transformedY]
      } else if (this.draggedPointIndex === -1) {
        let deltaX = this.mousedown_x - event.clientX
        let deltaY = this.mousedown_y - event.clientY
        this.mousedown_x = event.clientX
        this.mousedown_y = event.clientY

        this.kps.forEach(el => {
          el[0] -= deltaX / this.zoom_ratio
          el[1] -= deltaY / this.zoom_ratio
        })
      }
      this.draw()
    }
  }

  mouseUp (event) {
    event.preventDefault()
    this.mousedown_x = null
    this.mousedown_y = null
    this.isDragging = false
    this.draggedPointIndex = null
  }

  wheel (event) {
    event.preventDefault()
    if (event.ctrlKey) {
      if (event.deltaY < 0) {
        this.zoom_ratio = Math.min(2, this.zoom_ratio + 0.2)
      } else {
        this.zoom_ratio = Math.max(this.min_zoom, this.zoom_ratio - 0.2)
      }
      document.querySelector("#instantIdZoomSlider input").value = `${this.zoom_ratio}`
      this.invalidatePanZoom();
    }
    else if (event.altKey) {
      this.kps = this.changePointsPosition(event.deltaY > 0)
      this.draw()
    }
  }

  draw () {
    this.drawKeyPoints()
    this.drawMoveAll()
  }


  drawKeyPoints (canvas = this.canvas) {
    drawKps(canvas, this.kps, this.kpsOpacities)
  }

  drawImage (ref_image) {
    /*
    WebGL2 is used to render the reference background because,
    with masked images, pixel values cannot be read back through
    a plain 2D canvas context.
    */
    if (checkWebGlSupport()) {
        const gl = this.imageCanvas.getContext("webgl2");
        this.drawImageWebGL2(gl, ref_image)
        return
    }
    const ctx = this.imageCanvas.getContext("2d")
    ctx.drawImage(ref_image, 0, 0);
  }

  drawImageWebGL2 (gl, image) {
    const program = gl.createProgram()
    const vertexShader = createShader(gl, gl.VERTEX_SHADER, vertexShaderSrc)
    const fragmentShader = createShader(gl, gl.FRAGMENT_SHADER, fragmentShaderSrc)

    if (!vertexShader || !fragmentShader) {
      return;
    }

    gl.attachShader(program, vertexShader)
    gl.attachShader(program, fragmentShader)
    gl.linkProgram(program);

    if (!gl.getProgramParameter(program, gl.LINK_STATUS)) {
      console.error(gl.getProgramInfoLog(program))
      return;
    }

    gl.useProgram(program);

    const buffer = gl.createBuffer();
    gl.bindBuffer(gl.ARRAY_BUFFER, buffer);
    gl.bufferData(
      gl.ARRAY_BUFFER,
      new Float32Array([
        -1,  1,  0, 1,
        -1, -1,  0, 0,
         1,  1,  1, 1,
         1, -1,  1, 0,
      ]),
      gl.STATIC_DRAW
    )

    gl.vertexAttribPointer(0, 2, gl.FLOAT, false, 4 * 4, 0)
    gl.enableVertexAttribArray(0)
    gl.vertexAttribPointer(1, 2, gl.FLOAT, false, 4 * 4, 2 * 4)
    gl.enableVertexAttribArray(1)

    gl.pixelStorei(gl.UNPACK_FLIP_Y_WEBGL, true);
    gl.activeTexture(gl.TEXTURE0)
    gl.uniform1i(gl.getUniformLocation(program, 'uSampler'), 0)

    const texture = gl.createTexture()
    gl.bindTexture(gl.TEXTURE_2D, texture)
    gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGB, this.imageCanvas.width, this.imageCanvas.height, 0, gl.RGB, gl.UNSIGNED_BYTE, image)
    gl.generateMipmap(gl.TEXTURE_2D)

    gl.drawArrays(gl.TRIANGLE_STRIP, 0, 4)
  }
}


export class KPSDialog3d extends KPSDialogBase{
  constructor(w, h, angleWidget, kpsJsonWidget) {
    super(w, h)

    this.showLandmarks = false

    this.defaultKpsData = kpsJsonWidget.value.defaultKpsData

    this.kpsOpacities = kpsJsonWidget.value.opacities.length ? JSON.parse(JSON.stringify(kpsJsonWidget.value.opacities)) : [1, 1, 1, 1, 1]
    this.kps = JSON.parse(JSON.stringify(kpsJsonWidget.value.array))

    this.angleWidget = angleWidget
    this.kpsJsonWidget = kpsJsonWidget
    const landmarks = [
      'jawline', 'eyebrow_left', 'eyebrow_right', 'nose_bridge', 'nose_lower',
      'eye_left', 'eye_right', 'mouth_outer', 'mouth_inner'
    ]
    landmarks.forEach(el => {
      this[el] = kpsJsonWidget.value[el].length ? JSON.parse(JSON.stringify(kpsJsonWidget.value[el])) : []
    })

    this.rotateX = kpsJsonWidget.value.rotateX || 0
    this.rotateY = kpsJsonWidget.value.rotateY || 0
    this.rotateZ = kpsJsonWidget.value.rotateZ || 0

    this.setBasicControls()
    this.setControls()
    this.attachListeners()

    this.element.style.display = "block"
    this.initializeCanvasPanZoom()

    this.draw()
    if (this.hasImage) this.drawImage(referenceImage)
  }

  getDefaultKps () {
    try {
      const data = JSON.parse(this.defaultKpsData)
      const landmarks = [
        'jawline', 'eyebrow_left', 'eyebrow_right', 'nose_bridge', 'nose_lower',
        'eye_left', 'eye_right', 'mouth_outer', 'mouth_inner'
      ]
      landmarks.forEach(el => {
        this[el] = data[el]
      })

      return data.array
    } catch (e) {
      console.error(e)
    }
  }

  setControls () {
    const buttonBar = document.querySelector("#instantIdButtonBar")
    buttonBar.appendChild(this.createRotationXSlider())
    buttonBar.appendChild(this.createRotationYSlider())
    buttonBar.appendChild(this.createRotationZSlider())

    this.element.appendChild(buttonBar);
  }

  createRotationXSlider () {
    const el = createSlider("rotate X", "instantIdRotateX", "0", "360", "1", this.rotateX, (event) => {
      this.rotateX = parseFloat(event.target.value)
      this.draw()
    })
    return el
  }

  createRotationYSlider () {
    const el = createSlider("rotate Y", "instantIdRotateY", "0", "360", "1", this.rotateY, (event) => {
      this.rotateY = parseFloat(event.target.value)
      this.draw()
    })
    return el
  }

  createRotationZSlider () {
    const el = createSlider("rotate Z", "instantIdRotateZ", "0", "360", "1", this.rotateZ, (event) => {
      this.rotateZ = parseFloat(event.target.value)
      this.draw()
    })
    return el
  }

  attachListeners () {
    this.canvas.addEventListener('mousedown', this.mouseDown.bind(this))
    this.canvas.addEventListener('mousemove', this.mouseMove.bind(this))
    this.canvas.addEventListener('mouseup', this.mouseUp.bind(this))
    this.element.addEventListener('wheel', this.wheel.bind(this))
    this.element.addEventListener('DOMMouseScroll', (e) => e.preventDefault()) // thanks firefox.
    this.element.addEventListener('keydown', (event) => {
      if (event.key === "Escape") {
        this.closeModal()
      } else if (event.key === "Enter") {
        this.save()
      }
    })
  }

  closeModal () {
    document.body.removeChild(this.element)
  }

  async save () {
    this.kpsJsonWidget.value = {
      array: this.kps,
      opacities: this.kpsOpacities.map(el => parseFloat(el)),
      width: this.canvasWidth,
      height: this.canvasHeight,
      jawline: this.jawline,
      eyebrow_left: this.eyebrow_left,
      eyebrow_right: this.eyebrow_right,
      nose_bridge: this.nose_bridge,
      nose_lower: this.nose_lower,
      eye_left: this.eye_left,
      eye_right: this.eye_right,
      mouth_outer: this.mouth_outer,
      mouth_inner: this.mouth_inner,
      rotateX: this.rotateX,
      rotateY: this.rotateY,
      rotateZ: this.rotateZ,
      defaultKpsData: this.defaultKpsData
    }
 
    this.kpsJsonWidget.callback()
    const a = this.kps[0]
    const b = this.kps[1]
    let angle = Math.atan2(b[1] - a[1], b[0] - a[0]) * 180 / Math.PI

    this.angleWidget.value = angle
    this.closeModal()
  }

  changePointsPosition(closer = false, step = 10) {
    step /= this.zoom_ratio

    const center = getPointsCenter([
      ...this.kps, ...this.jawline, ...this.eyebrow_left, ...this.eyebrow_right, ...this.nose_bridge,
      ...this.nose_lower,...this.eye_left, ...this.eye_right, ...this.mouth_outer, ...this.mouth_inner
    ])

    const points = [
      ...this.kps, ...this.jawline, ...this.eyebrow_left, ...this.eyebrow_right, ...this.nose_bridge,
      ...this.nose_lower,...this.eye_left, ...this.eye_right, ...this.mouth_outer, ...this.mouth_inner
    ]

    const magnitudes = points.map(point => {
        const direction = [center[0] - point[0], center[1] - point[1], center[2] - point[2]]
        return Math.sqrt(direction[0] * direction[0] + direction[1] * direction[1] + direction[2] * direction[2])
    })

    const allGreaterThanMin = magnitudes.every(mag => mag > 10)

    points.forEach((point, index) => {
        const direction = [center[0] - point[0], center[1] - point[1], center[2] - point[2]]
        const magnitude = magnitudes[index]
        const unitVector = [direction[0] / magnitude, direction[1] / magnitude, direction[2] / magnitude]

        const scaleFactor = magnitude / Math.max(...magnitudes)
        const adjustedStep = step * scaleFactor
        const moveVector = [unitVector[0] * adjustedStep, unitVector[1] * adjustedStep, unitVector[2] * adjustedStep]

        if (closer) {
            if (allGreaterThanMin) {
                point[0] += moveVector[0]
                point[1] += moveVector[1]
                point[2] += moveVector[2]
            }
        } else {
            point[0] -= moveVector[0]
            point[1] -= moveVector[1]
            point[2] -= moveVector[2]
        }
    })
  }
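`changePointsPosition` nudges every landmark toward (or away from) the face centroid, scaling each point's step by its distance relative to the farthest point so the shape shrinks proportionally. A minimal Python sketch of the "move closer" case in 2D (hypothetical helper name, not part of the repo):

```python
import math

def move_toward_center(points, center, step):
    """Move each 2D point toward `center` by a step scaled by its
    distance relative to the farthest point, mirroring the scaling
    logic in changePointsPosition (closer=True)."""
    mags = [math.dist(p, center) for p in points]
    max_mag = max(mags)
    out = []
    for p, mag in zip(points, mags):
        unit = [(center[i] - p[i]) / mag for i in range(2)]
        adjusted = step * (mag / max_mag)  # farther points move more
        out.append([p[i] + unit[i] * adjusted for i in range(2)])
    return out

moved = move_toward_center([[0, 0], [10, 0]], [5, 0], 2)
# Both points are 5 units from the center, so each moves the full step of 2.
```

Because the step is proportional to distance, points at different radii keep their relative arrangement while the whole cloud contracts.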

  mouseDown (event) {
    event.preventDefault()
    const { offsetX: mouseX, offsetY: mouseY } = event

    if (event.ctrlKey) {
      if (event.buttons == 1) {
        this.mousedown_x = event.clientX
        this.mousedown_y = event.clientY
        this.mousedown_pan_x = this.pan_x
        this.mousedown_pan_y = this.pan_y
      }
      return
    } else {
      let maxX = -Infinity
      let maxY = -Infinity

      this.kps.forEach((kp) => {
        let [x, y] = kp
        x *= this.zoom_ratio
        y *= this.zoom_ratio
        maxX = x > maxX ? x : maxX
        maxY = y > maxY ? y : maxY
      })
      maxX += 20 * this.zoom_ratio
      maxY += 20 * this.zoom_ratio
      if ((mouseX >= maxX) && (mouseY >= maxY) && (mouseX < maxX + 60 * this.zoom_ratio) && (mouseY < maxY + 60 * this.zoom_ratio)) {
        this.mousedown_x = event.clientX
        this.mousedown_y = event.clientY
        this.isDragging = true
        this.draggedPointIndex = -1
      }
    }
  }

  mouseMove (event) {
    event.preventDefault();
    this.cursorX = event.pageX
    this.cursorY = event.pageY
    if (event.ctrlKey) {
      if (event.buttons == 1) {
        if (this.mousedown_x) {
          let deltaX = this.mousedown_x - event.clientX
          let deltaY = this.mousedown_y - event.clientY
          this.pan_x = this.mousedown_pan_x - deltaX
          this.pan_y = this.mousedown_pan_y - deltaY
          this.invalidatePanZoom()
        }
      }
    }
    if (this.isDragging) {
      let deltaX = this.mousedown_x - event.clientX
      let deltaY = this.mousedown_y - event.clientY
      this.mousedown_x = event.clientX
      this.mousedown_y = event.clientY
      const points = [
        this.kps, this.jawline, this.eyebrow_left, this.eyebrow_right, this.nose_bridge,
        this.nose_lower, this.eye_left, this.eye_right, this.mouth_outer, this.mouth_inner
      ]
      points.forEach(p => {
        p.forEach(el => {
          el[0] -= deltaX / this.zoom_ratio
          el[1] -= deltaY / this.zoom_ratio
        })
      })
      this.draw()
    }
  }

  mouseUp (event) {
    event.preventDefault()
    this.mousedown_x = null
    this.mousedown_y = null
    this.isDragging = false
    this.draggedPointIndex = null
  }

  wheel (event) {
    event.preventDefault()
    if (event.ctrlKey) {
      if (event.deltaY < 0) {
        this.zoom_ratio = Math.min(2, this.zoom_ratio + 0.2)
      } else {
        this.zoom_ratio = Math.max(this.min_zoom, this.zoom_ratio - 0.2)
      }
      document.querySelector("#instantIdZoomSlider input").value = `${this.zoom_ratio}`
      this.invalidatePanZoom();
    }
    else if (event.altKey) {
      this.changePointsPosition(event.deltaY > 0)
      this.draw()
    }
  }

  draw () {
    this.drawKeyPoints()
    const landmarks = [
      this.jawline, this.eyebrow_left, this.eyebrow_right, this.nose_bridge,
      this.nose_lower, this.eye_left, this.eye_right, this.mouth_inner, this.mouth_outer
    ]
    landmarks.forEach(points => { this.drawLandmarks(points) })
    this.drawMoveAll()
  }

  drawLandmarks (p, canvas = this.canvas) {
    if (p.length === 0) return
    const ctx = canvas.getContext('2d')
    const points = rotatePoints3D(p.map(el => [...el]), this.kps, this.rotateX, this.rotateY, this.rotateZ)

    ctx.beginPath()
    ctx.strokeStyle  = "white"

    for (let i = 1; i < points.length; i++) {
        ctx.moveTo(points[i - 1][0], points[i - 1][1]);
        ctx.lineTo(points[i][0], points[i][1]);
    }
    ctx.stroke()
  }

  drawKeyPoints (canvas = this.canvas) {
    const kps = rotatePoints3D(this.kps.map(el => [...el]), this.kps, this.rotateX, this.rotateY, this.rotateZ)
    drawKps(canvas, kps, this.kpsOpacities)
  }
}
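`save()` derives the head roll angle from the first two keypoints (the eyes) with `atan2`. A minimal Python sketch of the same computation (hypothetical function name and sample points, used only for illustration):

```python
import math

def kps_angle(a, b):
    """Roll angle in degrees of the segment from keypoint a to b,
    mirroring Math.atan2(b[1] - a[1], b[0] - a[0]) * 180 / Math.PI."""
    return math.atan2(b[1] - a[1], b[0] - a[0]) * 180 / math.pi

# Eyes on a horizontal line give 0 degrees; a lower second eye
# (larger y, since canvas y grows downward) gives a positive roll.
level = kps_angle([100, 200], [180, 200])   # 0.0
tilted = kps_angle([100, 200], [180, 280])  # 45.0
```

Note that because canvas coordinates have y increasing downward, a positive angle corresponds to a clockwise tilt on screen.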

================================================
FILE: ui/extension.js
================================================
import { app } from "../../scripts/app.js";
import { api } from "../../scripts/api.js";
import { drawKps, normalizePoints, rotatePoints3D, getImgFromInput, getDefaultKpsData } from "./helpers.js"
import { KPSDialog2d, KPSDialog3d } from "./dialogs.js"


app.registerExtension({
  getCustomWidgets(app) {
    return {
      HIDDEN_STRING_JSON(node, inputName, inputData) {
        const widget = {
            type: inputData[0],
            name: inputName,
            async serializeValue() {
              return JSON.stringify(widget.value)
            }
        }
        node.addCustomWidget(widget)
        return widget
      }
    }
  },
  name: "ComfyUI.instantid-faceswap",
  async beforeRegisterNodeDef(nodeType, nodeData, app) {
    if (nodeType.comfyClass == "KpsDraw" || nodeType.comfyClass == "Kps3dFromImage") {
      nodeType.prototype.showImage = function () {
        let w = this.widgets.find(w => w.name === "width").value
        let h = this.widgets.find(w => w.name === "height").value

        if (w > 0 && h > 0) {
          let kpsWidget = this.widgets.find(w => w.name === "kps")
          let kps = kpsWidget.value.array
          let kps_opacities = kpsWidget.value.opacities

          if (!kps?.length) {
            try {
              const parsed_kps = JSON.parse(kpsWidget.value)
              kps = parsed_kps.array
              if (parsed_kps.opacities && parsed_kps.opacities.length) {
                kps_opacities = parsed_kps.opacities
              }
            } catch(e) {
              console.log(e)
              return;
            }
          }
          if (kps) {
            const c = document.createElement("canvas")
            c.width = w
            c.height = h
            if (kpsWidget.value.rotateX || kpsWidget.value.rotateY || kpsWidget.value.rotateZ) {
              kps = rotatePoints3D(kps.map(el => [...el]), kps, kpsWidget.value.rotateX, kpsWidget.value.rotateY, kpsWidget.value.rotateZ)
            }
            drawKps(c, kps, kps_opacities)
            const image = new Image()
            image.src = c.toDataURL()
            this.imgs = [image]
            this.setSizeForImage()
            app.graph.setDirtyCanvas(true)
          }
        }
      }

      const onNodeCreated = nodeType.prototype.onNodeCreated;
      nodeType.prototype.onNodeCreated = function() {
        const r = onNodeCreated ? onNodeCreated.apply(this, arguments) : undefined;
        this.kpsJsonWidget = this.widgets.find(w => w.name === "kps")
        this.kpsJsonWidget.callback = this.showImage.bind(this)
        if (this.kpsJsonWidget.value == null) {
          this.kpsJsonWidget.value = getDefaultKpsData()
        }

        requestAnimationFrame(() => {
          if (this.kpsJsonWidget.value?.array?.length) {
            this.showImage();
          }
        })

        const angleWidget = this.addWidget("string", "angle", "", () => {})

        if (nodeType.comfyClass == "Kps3dFromImage") {

          const div = document.createElement("div")
          div.style.fontSize = "12px"
          div.style.backgroundColor = "#323334"
          div.style.padding = "8px"
          div.innerText = ""

          this.addDOMWidget("info_text2", "", div, {getMaxHeight: () => 50})

          const doMagic = this.addWidget("button", "getKPS", "", () => {
            const inputNode = getImgFromInput(this.getInputNode(0))

            const reference_image = inputNode.imgs[inputNode.imageIndex || 0].currentSrc
            div.style.color = "white"
            div.innerText = "Getting landmarks ..."

            doMagic.disabled = true
            openDialogWidget.disabled = true
            api.fetchApi("/get_keypoints_for_instantId", {
                method: "POST",
                headers: {
                  "Content-Type": "application/json",
                },
                body: JSON.stringify({image: reference_image})
              }).then(async (data) => {
                const json = await data.json()
                if (json.error) {
                  throw Error(json.error)
                }
                const normalizedPoints = normalizePoints(
                  [
                    ...json.data.jawline,
                    ...json.data.eyebrow_left,
                    ...json.data.eyebrow_right,
                    ...json.data.nose_bridge,
                    ...json.data.nose_lower,
                    ...json.data.eye_left,
                    ...json.data.eye_right,
                    ...json.data.mouth_outer,
                    ...json.data.mouth_inner,
                    ...json.data.kps
                  ],
                  this.widgets.find(w => w.name === "width").value, this.widgets.find(w => w.name === "height").value
                )

                this.kpsJsonWidget.value = getDefaultKpsData()

                this.kpsJsonWidget.value.jawline = normalizedPoints.slice(0, 17)
                this.kpsJsonWidget.value.eyebrow_left = normalizedPoints.slice(17, 22)
                this.kpsJsonWidget.value.eyebrow_right = normalizedPoints.slice(22, 27)
                this.kpsJsonWidget.value.nose_bridge = normalizedPoints.slice(27, 31)
                this.kpsJsonWidget.value.nose_lower = normalizedPoints.slice(31, 36)
                this.kpsJsonWidget.value.eye_left = normalizedPoints.slice(36, 42)
                this.kpsJsonWidget.value.eye_right = normalizedPoints.slice(42, 48)
                this.kpsJsonWidget.value.mouth_outer = normalizedPoints.slice(48, 60)
                this.kpsJsonWidget.value.mouth_inner = normalizedPoints.slice(60, 68)

                this.kpsJsonWidget.value.array = [
                  normalizedPoints[normalizedPoints.length - 5],
                  normalizedPoints[normalizedPoints.length - 4],
                  normalizedPoints[normalizedPoints.length - 3],
                  normalizedPoints[normalizedPoints.length - 2],
                  normalizedPoints[normalizedPoints.length - 1]
                ]
                this.kpsJsonWidget.value.width = this.widgets.find(w => w.name === "width").value
                this.kpsJsonWidget.value.height = this.widgets.find(w => w.name === "height").value

                this.kpsJsonWidget.value.defaultKpsData = JSON.stringify(this.kpsJsonWidget.value) 
                div.style.color = "#08a85a"
                div.innerText = "Success"

                this.showImage()

              }).catch(e => {
                div.style.color = "#C70039"
                div.innerText = e.message || "ERROR"
                console.log(e)
              }).finally(() => {
                doMagic.disabled = false
                openDialogWidget.disabled = false
              })
          })
          doMagic.label = "Get Kps From Image";
        }


        const openDialogWidget = this.addWidget("button", "drawbtn", "", () => {
          let w = this.widgets.find(w => w.name === "width").value
          let h = this.widgets.find(w => w.name === "height").value
          let reference_image
          const inputNode = getImgFromInput(this.getInputNode(0))
          if (inputNode?.imgs?.length && nodeType.comfyClass != "Kps3dFromImage") {
            reference_image = inputNode.imgs[inputNode.imageIndex || 0]
            w = reference_image.width
            h = reference_image.height
          }

          if (w > 0 && h > 0) {
            if (nodeType.comfyClass == "Kps3dFromImage") {
              new KPSDialog3d(w, h, angleWidget, this.kpsJsonWidget)
            } else {
              new KPSDialog2d(
                w, h, reference_image, angleWidget, this.kpsJsonWidget
              )
            }
          }
        });

        const buttonText = nodeType.comfyClass === "KpsDraw" ? "draw kps" : "change kps"

        openDialogWidget.label = buttonText
        angleWidget.label = "angle: "
        angleWidget.value = "none"
        angleWidget.disabled = true
      }
      nodeType.prototype.serialize = true

      const onConnectionsChange = nodeType.prototype.onConnectionsChange;
      nodeType.prototype.onConnectionsChange = function (side, slot, connect, link_info, output) {
        const r = onConnectionsChange?.apply(this, arguments);

        if (output.name === "image_reference" && nodeType.comfyClass == "KpsDraw") {
          const widthWidget = this.widgets.find(w => w.name === "width");
          const heightWidget = this.widgets.find(w => w.name === "height");
          const angleWidget = this.widgets.find(w => w.name === "angle");

          this.imgs = []
          if (output.link) {
            widthWidget.disabled = true
            heightWidget.disabled = true
            const inputNode = getImgFromInput(this.getInputNode(0))
            if (inputNode?.imgs?.length) {
              const reference_image = inputNode.imgs[inputNode.imageIndex || 0]
              widthWidget.value = reference_image.width
              heightWidget.value = reference_image.height
            }
          } else {
            widthWidget.disabled = false
            heightWidget.disabled = false
          }

          if (angleWidget) {
            angleWidget.value = "none"
          }
        }

        if (output.name === "image" && nodeType.comfyClass == "Kps3dFromImage") {
          this.imgs = []
          const getKPSWidget = this.widgets.find(w => w.name === "getKPS")
          getKPSWidget.disabled = !output.link
        }
        return r;
      }
    }
  }
})

================================================
FILE: ui/helpers.js
================================================
export const getPointsCenter = (points) => {
  let sumX = 0, sumY = 0, sumZ = 0;
  points.forEach(([x, y, z]) => {
      sumX += x;
      sumY += y;
      if (z != null) sumZ += z
  });

  const ret = [sumX / points.length, sumY / points.length]
  if (points[0].length > 2) ret.push(sumZ / points.length)
  return ret
}

export const getPoinsMinMax = (points) => {
  let minX = points[0][0], maxX = points[0][0];
  let minY = points[0][1], maxY = points[0][1];
  points.forEach(([x, y]) => {
      if (x < minX) minX = x;
      if (x > maxX) maxX = x;
      if (y < minY) minY = y;
      if (y > maxY) maxY = y;
  });
  return { minX, maxX, minY, maxY };
}

export const drawKps = (canvas, kps, opacities) => {
  const color_list = [
    `255, 0, 0,`,
    `0, 255, 0,`,
    `0, 0, 255,`,
    `255, 255, 0,`,
    `255, 0, 255,`
  ]

  const ctx = canvas.getContext("2d")
  const stickWidth = 10
  const limbSeq = [[0, 2], [1, 2], [3, 2], [4, 2]]

  ctx.clearRect(0, 0, canvas.width, canvas.height)
  ctx.fillStyle = "black"
  ctx.fillRect(0, 0, canvas.width, canvas.height)
  ctx.save()
  limbSeq.forEach((limb, idx) => {
    const kp1 = kps[limb[0]]
    const kp2 = kps[limb[1]]
    const color = `rgba( ${color_list[limb[0]]} ${0.6 * opacities[limb[0]]})`

    const x = [kp1[0], kp2[0]];
    const y = [kp1[1], kp2[1]];
    const length = Math.sqrt((x[0] - x[1]) ** 2 + (y[0] - y[1]) ** 2)
    const angle = Math.atan2(y[1] - y[0], x[1] - x[0])

    const num_points = 20;
    const polygon = []

    const midX = (x[0] + x[1]) / 2
    const midY = (y[0] + y[1]) / 2

    for (let i = 0; i <= num_points; i++) {
      const theta = (i / num_points) * Math.PI * 2
      const dx = (length / 2) * Math.cos(theta);
      const dy = (stickWidth / 2) * Math.sin(theta);
      const rx = Math.cos(angle) * dx - Math.sin(angle) * dy + midX
      const ry = Math.sin(angle) * dx + Math.cos(angle) * dy + midY
      polygon.push([rx, ry]);
    }

    ctx.beginPath();
    ctx.moveTo(polygon[0][0], polygon[0][1])
    for (let i = 1; i < polygon.length; i++) {
      ctx.lineTo(polygon[i][0], polygon[i][1])
    }
    ctx.closePath();
    ctx.fillStyle = color;
    ctx.fill();
  })

  kps.forEach((kp, idx) => {
    const [x, y] = kp;
    const color = `rgba( ${color_list[idx]} ${opacities[idx]})`
    ctx.beginPath();
    ctx.arc(x, y, 10, 0, Math.PI * 2);
    ctx.fillStyle = color;
    ctx.fill();
  });
  ctx.restore();
}
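Each limb in `drawKps` is rasterized as an ellipse whose major axis spans the two keypoints, sampled at 20 angular steps and rotated into the segment's orientation. A small Python sketch of that sampling math (hypothetical function name, mirroring the loop above):

```python
import math

def limb_polygon(p1, p2, stick_width=10, num_points=20):
    """Sample an ellipse whose major axis spans p1..p2, as drawKps does
    for each limb before filling it as a polygon."""
    length = math.hypot(p1[0] - p2[0], p1[1] - p2[1])
    angle = math.atan2(p2[1] - p1[1], p2[0] - p1[0])
    mid_x, mid_y = (p1[0] + p2[0]) / 2, (p1[1] + p2[1]) / 2
    poly = []
    for i in range(num_points + 1):
        theta = (i / num_points) * math.pi * 2
        dx = (length / 2) * math.cos(theta)   # along the segment
        dy = (stick_width / 2) * math.sin(theta)  # across the segment
        # rotate the local (dx, dy) offset into the segment's frame
        rx = math.cos(angle) * dx - math.sin(angle) * dy + mid_x
        ry = math.sin(angle) * dx + math.cos(angle) * dy + mid_y
        poly.append((rx, ry))
    return poly

poly = limb_polygon((0, 0), (100, 0))
# theta = 0 places the first sample at the far end of the segment, (100, 0).
```

This is the canvas-side counterpart of the `cv2.ellipse2Poly` call in `utils.py`, which rasterizes the same stick shape server-side.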

export const checkWebGlSupport = () => {
  const canvas = document.createElement("canvas");
  const gl = canvas.getContext("webgl2")
  return !!gl
}

export const normalizePoints = (points, w, h) => {
  const minValues = [
    Math.min(...points.map(p => p[0])),
    Math.min(...points.map(p => p[1])),
    Math.min(...points.map(p => p[2]))
  ];

  const maxValues = [
    Math.max(...points.map(p => p[0])),
    Math.max(...points.map(p => p[1])),
    Math.max(...points.map(p => p[2]))
  ];

  const ranges = [
    maxValues[0] - minValues[0], 
    maxValues[1] - minValues[1],
    maxValues[2] - minValues[2] 
  ];

  const scaleX = w / ranges[0]
  const scaleY = h / ranges[1]

  const scaleFactor = Math.min(scaleX, scaleY);

  const normalizedPoints = points.map(point => [
    (point[0] - minValues[0]) * scaleFactor,
    (point[1] - minValues[1]) * scaleFactor,
    (point[2] - minValues[2]) * scaleFactor
  ]);

  const maxNormalizedValues = [
    Math.max(...normalizedPoints.map(p => p[0])),
    Math.max(...normalizedPoints.map(p => p[1])),
    Math.max(...normalizedPoints.map(p => p[2]))
  ];

  const centerOffset = [
    (w - maxNormalizedValues[0]) / 2,
    (h - maxNormalizedValues[1]) / 2,
    -maxNormalizedValues[2] / 2
  ];

  const centeredPoints = normalizedPoints.map(point => [
    point[0] + centerOffset[0],
    point[1] + centerOffset[1],
    point[2] + centerOffset[2]
  ]);

  return centeredPoints;
}

export const rotatePoints3D = (points, kps, angleXDeg, angleYDeg, angleZDeg) => {
  const angleX = angleXDeg * (Math.PI / 180);
  const angleY = angleYDeg * (Math.PI / 180);
  const angleZ = angleZDeg * (Math.PI / 180);

  const numPoints = kps.length;
  const center = kps.reduce((acc, point) => {
    acc[0] += point[0]
    acc[1] += point[1]
    acc[2] += point[2]
    return acc;
  }, [0, 0, 0]).map(coord => coord / numPoints)

  const translatedPoints = points.map(point => [
    point[0] - center[0],
    point[1] - center[1],
    point[2] - center[2]
  ]);

  function rotateX(point, angle) {
    const cosTheta = Math.cos(angle);
    const sinTheta = Math.sin(angle);
    return [
      point[0],
      point[1] * cosTheta - point[2] * sinTheta,
      point[1] * sinTheta + point[2] * cosTheta
    ];
  }

  function rotateY(point, angle) {
    const cosTheta = Math.cos(angle);
    const sinTheta = Math.sin(angle);
    return [
      point[0] * cosTheta + point[2] * sinTheta,
      point[1],
      -point[0] * sinTheta + point[2] * cosTheta
    ];
  }

  function rotateZ(point, angle) {
    const cosTheta = Math.cos(angle);
    const sinTheta = Math.sin(angle);
    return [
      point[0] * cosTheta - point[1] * sinTheta,
      point[0] * sinTheta + point[1] * cosTheta,
      point[2]
    ];
  }

  const rotatedPoints = translatedPoints.map(point => {
    let rotatedPoint = rotateX(point, angleX)
    rotatedPoint = rotateY(rotatedPoint, angleY)
    rotatedPoint = rotateZ(rotatedPoint, angleZ)
    return rotatedPoint;
  });

  const finalPoints = rotatedPoints.map(point => [
    point[0] + center[0],
    point[1] + center[1],
    point[2] + center[2]
  ])

  return finalPoints
}
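`rotatePoints3D` translates the points so the centroid of the five kps sits at the origin, applies the three axis rotations in X then Y then Z order, and translates back. A compact Python sketch of that pipeline (hypothetical function name; the center is passed in directly instead of being derived from the kps):

```python
import math

def rotate_points_3d(points, center, ax_deg, ay_deg, az_deg):
    """Rotate 3D points about `center` in X -> Y -> Z order (degrees),
    mirroring rotatePoints3D in helpers.js."""
    ax, ay, az = (math.radians(a) for a in (ax_deg, ay_deg, az_deg))
    out = []
    for px, py, pz in points:
        x, y, z = px - center[0], py - center[1], pz - center[2]
        # rotate about the X axis
        y, z = y * math.cos(ax) - z * math.sin(ax), y * math.sin(ax) + z * math.cos(ax)
        # rotate about the Y axis
        x, z = x * math.cos(ay) + z * math.sin(ay), -x * math.sin(ay) + z * math.cos(ay)
        # rotate about the Z axis
        x, y = x * math.cos(az) - y * math.sin(az), x * math.sin(az) + y * math.cos(az)
        out.append([x + center[0], y + center[1], z + center[2]])
    return out

# A 90-degree Z rotation about the origin maps (1, 0, 0) to (0, 1, 0).
p = rotate_points_3d([[1, 0, 0]], [0, 0, 0], 0, 0, 90)[0]
```

The X -> Y -> Z order matters: these rotations do not commute, so reordering them would change the rendered pose.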

export const getImgFromInput = (inputNode) => {
  if (inputNode?.type === "Reroute") {
    return getImgFromInput(inputNode.getInputNode(0))
  }
  return inputNode
} 

export const getDefaultKpsData = () => ({
    array: [], height: 0, width: 0, rotateX: 0, rotateY: 0, rotateZ: 0, opacities: [1, 1, 1, 1, 1],
    jawline: [], eyebrow_left: [], eyebrow_right: [], nose_bridge: [], nose_lower: [],
    eye_left: [], eye_right: [], mouth_outer: [], mouth_inner: []
})

================================================
FILE: ui/shaders.js
================================================
export const vertexShaderSrc = `#version 300 es
#pragma vscode_glsllint_stage: vert
layout(location=0) in vec4 aPosition;
layout(location=1) in vec2 aTexCoord;
out vec2 vTexCoord;
void main()
{
  vTexCoord = aTexCoord;
    gl_Position = aPosition;
}`;

export const fragmentShaderSrc = `#version 300 es
#pragma vscode_glsllint_stage: frag
precision mediump float;
in vec2 vTexCoord;
uniform sampler2D uSampler;
out vec4 fragColor;
void main()
{
    fragColor = texture(uSampler, vTexCoord);
}`;


export const createShader = (gl, type, source) => {
  const shader = gl.createShader(type)
  gl.shaderSource(shader, source)
  gl.compileShader(shader)
  if (!gl.getShaderParameter(shader, gl.COMPILE_STATUS)) {
    console.error(gl.getShaderInfoLog(shader))
    gl.deleteShader(shader)
    return null
  }
  return shader
}

================================================
FILE: ui/uiHelpers.js
================================================
export const createSlider = (name, id, min, max, step, value, callback) => {
    const divElement = document.createElement("div");
    divElement.id = id;
    divElement.style.cssFloat = "left"
    divElement.style.fontFamily = "sans-serif"
    divElement.style.marginRight = "4px"
    divElement.style.color = "var(--input-text)"
    divElement.style.backgroundColor = "var(--comfy-input-bg)"
    divElement.style.borderRadius = "8px"
    divElement.style.borderColor = "var(--border-color)"
    divElement.style.borderStyle = "solid"
    divElement.style.fontSize = "15px"
    divElement.style.height = "21px"
    divElement.style.padding = "1px 6px"
    divElement.style.display = "flex"
    divElement.style.position = "relative"
    divElement.style.top = "2px"
    divElement.style.pointerEvents = "auto"

    const input = document.createElement("input")
    const labelElement = document.createElement("label")
    input.setAttribute("type", "range")
    input.setAttribute("min", `${min}`)
    input.setAttribute("max", `${max}`)
    input.setAttribute("step", `${step}`)
    input.setAttribute("value", `${value}`)
    labelElement.textContent = name;
    divElement.appendChild(labelElement)
    divElement.appendChild(input)
    input.addEventListener("input", callback)
    return divElement;
}

export const createButton = (name, isRight, callback) => {
  const button = document.createElement("button");
  button.innerText = name;
  button.style.pointerEvents = "auto";
  button.addEventListener("click", callback);
  if (isRight) {
    button.style.cssFloat = "right";
    button.style.marginLeft = "4px";
  } else {
    button.style.cssFloat = "left";
    button.style.marginRight = "4px";
  }
  return button;
}

export const createRadiobox = (name, label, opacities, index, callback) => {
  const div = document.createElement("div");
  div.style.marginTop = "20px"
  const sliderInput = document.createElement("input")
  sliderInput.style.pointerEvents = "auto"
  sliderInput.id = `opacity_slider_${name}`
  sliderInput.type = "range"
  sliderInput.step = "0.1"
  sliderInput.min = "0"
  sliderInput.max = "1"
  sliderInput.tabIndex = "1"
  sliderInput.style.width = "100%"
  sliderInput.name = `s_${name}`
  sliderInput.value = opacities[index]
  sliderInput.addEventListener("change", (event) => {
    const input = document.querySelector(`#opacity_input_${name}`)
    if (input) input.value = event.target.value;
    opacities[index] = event.target.value;
    callback()
  })

  const valueInput = document.createElement("input")
  valueInput.style.pointerEvents = "auto"
  valueInput.id = `opacity_input_${name}`
  valueInput.type = "number";
  valueInput.min = "0";
  valueInput.max = "1";
  valueInput.tabIndex = "1"
  valueInput.style.width = "100%"
  valueInput.name = `i_${name}`
  valueInput.value = opacities[index];
  valueInput.addEventListener("change", (event) => {
    const input = document.querySelector(`#opacity_slider_${name}`)
    if (input) input.value = event.target.value
    opacities[index] = event.target.value
    callback()
  })

  div.style.marginRight = "4px";
  const labelDiv = document.createElement("div")
  labelDiv.innerText = label
  div.appendChild(labelDiv)
  div.appendChild(sliderInput)
  div.appendChild(valueInput)
  return div
}

================================================
FILE: utils.py
================================================
import numpy as np
import cv2
import math
import torch
from torchvision.transforms import functional as TF
from .ip_adapter.instantId import CrossAttentionPatch

def draw_kps(w, h, kps, color_list=[(255, 0, 0), (0, 255, 0), (0, 0, 255), (255, 255, 0), (255, 0, 255)], alphas=[1, 1, 1, 1, 1]):
    stickwidth = 4
    limbSeq = np.array([[0, 2], [1, 2], [3, 2], [4, 2]])
    kps = np.array(kps)
    out_img = np.zeros([int(h), int(w), 3], dtype=np.uint8)

    for i in range(len(limbSeq)):
        index = limbSeq[i]
        color = color_list[index[0]]
        alpha = alphas[index[0]]

        x = kps[index][:, 0]
        y = kps[index][:, 1]
        length = ((x[0] - x[1]) ** 2 + (y[0] - y[1]) ** 2) ** 0.5
        angle = math.degrees(math.atan2(y[0] - y[1], x[0] - x[1]))

        polygon = cv2.ellipse2Poly(
            (int(np.mean(x)), int(np.mean(y))), (int(length / 2), stickwidth), int(angle), 0, 360, 1
        )

        limb_img = np.zeros_like(out_img)
        cv2.fillConvexPoly(limb_img, polygon, color)
        out_img = cv2.addWeighted(out_img, 1, limb_img, float(alpha) * 0.6, 0)

    for idx_kp, kp in enumerate(kps):
        color = color_list[idx_kp]
        alpha = alphas[idx_kp] 
        x = kp[0]
        y = kp[1]
        kp_img = out_img.copy()
        cv2.circle(kp_img, (int(x), int(y)), 10, color, -1)
        out_img = cv2.addWeighted(out_img, 1 - float(alpha), kp_img, float(alpha), 0)

    return out_img.astype(np.uint8)


# based on https://github.com/laksjdjf/IPAdapter-ComfyUI/blob/main/ip_adapter.py#L19
def set_model_patch_replace(model, patch_kwargs, key):
  attn = "attn2"
  to = model.model_options["transformer_options"].copy()
  if "patches_replace" not in to:
    to["patches_replace"] = {}
  else:
    to["patches_replace"] = to["patches_replace"].copy()

  if attn not in to["patches_replace"]:
    to["patches_replace"][attn] = {}
  else:
    to["patches_replace"][attn] = to["patches_replace"][attn].copy()
  if key not in to["patches_replace"][attn]:
    to["patches_replace"][attn][key] = CrossAttentionPatch(**patch_kwargs)
    model.model_options["transformer_options"] = to
  else:
    to["patches_replace"][attn][key].set_new_condition(**patch_kwargs)


def resize_to_fit_area(original_width, original_height, area_width, area_height):
  base_pixels = 8
  max_area = area_width * area_height
  aspect_ratio = original_width / original_height

  scale_factor = math.sqrt(max_area / (original_width * original_height))
  new_width = int(original_width * scale_factor)
  new_height = int(original_height * scale_factor)

  new_width = new_width // base_pixels * base_pixels
  new_height = new_height // base_pixels * base_pixels

  if new_width * new_height > max_area:
      new_width = math.floor(math.sqrt(max_area * aspect_ratio)) // base_pixels * base_pixels
      new_height = math.floor(new_width / aspect_ratio) // base_pixels * base_pixels

  return (new_width, new_height)
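For reference, the sizing logic above can be exercised in isolation. The sketch below is a standalone mirror of `resize_to_fit_area` (the name `fit_area` is illustrative, not part of the module):

```python
import math

# Standalone mirror of resize_to_fit_area: scale (w, h) to fill the target
# area while keeping the aspect ratio, snapping both sides down to multiples of 8.
def fit_area(w, h, area_w, area_h, base=8):
    max_area = area_w * area_h
    scale = math.sqrt(max_area / (w * h))
    new_w = int(w * scale) // base * base
    new_h = int(h * scale) // base * base
    if new_w * new_h > max_area:  # snapping overshot: re-derive from the aspect ratio
        ar = w / h
        new_w = math.floor(math.sqrt(max_area * ar)) // base * base
        new_h = math.floor(new_w / ar) // base * base
    return new_w, new_h

print(fit_area(1000, 800, 1024, 1024))  # (1144, 912)
```

Both dimensions stay multiples of 8, as required by the SDXL latent grid.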


def get_mask_bbox_with_padding(mask_image, pad_top, pad_right, pad_bottom, pad_left):
  mask_segments = torch.nonzero(mask_image == 1, as_tuple=False)
  # check for an empty selection directly; counting nonzero *indices* would
  # wrongly report an empty mask when the only masked pixel is at (0, 0)
  if mask_segments.numel() == 0:
    raise Exception("Draw a mask on pose image")

  m_y1 = torch.min(mask_segments[:, 0]).item()
  m_y2 = torch.max(mask_segments[:, 0]).item()
  m_x1 = torch.min(mask_segments[:, 1]).item()
  m_x2 = torch.max(mask_segments[:, 1]).item()

  height, width = mask_image.shape

  p_x1 = max(0, m_x1 - pad_left)
  p_y1 = max(0, m_y1 - pad_top)
  p_x2 = min(width, m_x2 + pad_right)
  p_y2 = min(height, m_y2 + pad_bottom)

  return int(p_x1), int(p_y1), int(p_x2), int(p_y2)
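The padding-and-clamping behaviour can be checked without torch; this standalone sketch mirrors the same logic over nested lists (`mask_bbox` is an illustrative name):

```python
# Standalone mirror of get_mask_bbox_with_padding using nested lists instead
# of a tensor: bounding box of all pixels equal to 1, grown by the per-side
# padding and clamped to the image bounds.
def mask_bbox(mask, pad_top, pad_right, pad_bottom, pad_left):
    coords = [(y, x) for y, row in enumerate(mask)
                     for x, v in enumerate(row) if v == 1]
    if not coords:
        raise ValueError("Draw a mask on pose image")
    ys = [y for y, _ in coords]
    xs = [x for _, x in coords]
    height, width = len(mask), len(mask[0])
    return (max(0, min(xs) - pad_left), max(0, min(ys) - pad_top),
            min(width, max(xs) + pad_right), min(height, max(ys) + pad_bottom))

mask = [[0, 0, 0, 0],
        [0, 1, 0, 0],
        [0, 0, 1, 0],
        [0, 0, 0, 0]]
print(mask_bbox(mask, 1, 1, 1, 1))  # (0, 0, 3, 3)
```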


def get_kps_from_image(image, insightface):
  np_pose_image = (255.0 * image.cpu().numpy().squeeze()).clip(0, 255).astype(np.uint8)
  face_info = insightface.get(cv2.cvtColor(np_pose_image, cv2.COLOR_RGB2BGR))
  assert len(face_info) > 0, "No face detected in pose image"
  face_info = sorted(face_info, key=lambda x: (x["bbox"][2] - x["bbox"][0]) * (x["bbox"][3] - x["bbox"][1]))[-1] # only use the maximum face
  return face_info["kps"]


def get_angle(a=(0, 0), b=(0, 0), round_angle=False):
    # a, b - eye keypoints; returns the tilt of the line between them in degrees
    angle = math.degrees(math.atan2(b[1] - a[1], b[0] - a[0]))
    if round_angle:
        # snap to the nearest multiple of 90; atan2 is bounded by (-180, 180],
        # so the snapped result never reaches 360
        angle = round(angle / 90) * 90

    return angle
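A quick standalone check of the snapping behaviour (the name `eye_angle` is illustrative):

```python
import math

# Standalone mirror of get_angle: tilt of the eye line in degrees, optionally
# snapped to the nearest multiple of 90 (as used for auto-rotation).
def eye_angle(a, b, snap=False):
    angle = math.degrees(math.atan2(b[1] - a[1], b[0] - a[0]))
    if snap:
        angle = round(angle / 90) * 90
    return angle

print(eye_angle((0, 0), (1, 2), snap=True))   # 90
print(eye_angle((0, 0), (10, 0), snap=True))  # 0
```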


def calculate_size_after_rotation(width, height, angle):
    angle_rad = math.radians(angle)

    new_width = abs(width * math.cos(angle_rad)) + abs(height * math.sin(angle_rad))
    new_height = abs(width * math.sin(angle_rad)) + abs(height * math.cos(angle_rad))

    return (int(np.ceil(new_width)), int(np.ceil(new_height)))
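The formula above is the axis-aligned bounding box of a rotated rectangle; a standalone mirror using only the standard library (`rotated_bounds` is an illustrative name):

```python
import math

# Standalone mirror of calculate_size_after_rotation: the axis-aligned
# bounding box of a width x height rectangle rotated by `angle` degrees.
def rotated_bounds(width, height, angle):
    rad = math.radians(angle)
    new_w = abs(width * math.cos(rad)) + abs(height * math.sin(rad))
    new_h = abs(width * math.sin(rad)) + abs(height * math.cos(rad))
    return math.ceil(new_w), math.ceil(new_h)

print(rotated_bounds(100, 50, 30))  # (112, 94)
```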


def image_rotate_with_pad(image, clockwise, angle):
  if not clockwise: angle *= -1

  image = image.squeeze(0)
  image = image.permute(2, 0, 1)
  image = TF.rotate(image, angle, fill=0, expand=True)
  image = image.permute(1, 2, 0)
  image = image.unsqueeze(0)
  return image


def kps_rotate_2d(points, original_width, original_height, new_width, new_height, angle):
    angle_rad = math.radians(angle)
    
    original_center_x = original_width / 2
    original_center_y = original_height / 2
    
    new_center_x = new_width / 2
    new_center_y = new_height / 2
    
    cos_angle = math.cos(angle_rad)
    sin_angle = math.sin(angle_rad)
    
    rotated_points = []
    
    for point in points:
        x, y = point
        
        translated_x = x - original_center_x
        translated_y = y - original_center_y
        
        rotated_x = translated_x * cos_angle - translated_y * sin_angle
        rotated_y = translated_x * sin_angle + translated_y * cos_angle
        
        final_x = int(round(rotated_x + new_center_x))
        final_y = int(round(rotated_y + new_center_y))
        
        rotated_points.append([final_x, final_y])
    
    return rotated_points
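The translate-rotate-translate scheme above can be checked standalone; `rotate_points` below is an illustrative mirror of `kps_rotate_2d`:

```python
import math

# Standalone mirror of kps_rotate_2d: rotate points about the centre of the
# original canvas, then re-centre them on the (possibly expanded) new canvas.
def rotate_points(points, ow, oh, nw, nh, angle):
    rad = math.radians(angle)
    cos_a, sin_a = math.cos(rad), math.sin(rad)
    out = []
    for x, y in points:
        tx, ty = x - ow / 2, y - oh / 2
        rx = tx * cos_a - ty * sin_a
        ry = tx * sin_a + ty * cos_a
        out.append([int(round(rx + nw / 2)), int(round(ry + nh / 2))])
    return out

print(rotate_points([[10, 0]], 100, 100, 100, 100, 90))  # [[100, 10]]
```

A point at the canvas centre is a fixed point of the rotation, which makes a convenient sanity check.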


def kps_rotate_3d(points, angleXDeg, angleYDeg, angleZDeg):
    angleX = math.radians(angleXDeg)
    angleY = math.radians(angleYDeg)
    angleZ = math.radians(angleZDeg)

    center = np.mean(points, axis=0)

    translated_points = np.array([point - center for point in points])

    def rotate_x(point, angle):
        cos_theta = math.cos(angle)
        sin_theta = math.sin(angle)
        return [
            int(point[0]),
            int(point[1] * cos_theta - point[2] * sin_theta),
            int(point[1] * sin_theta + point[2] * cos_theta)
        ]

    def rotate_y(point, angle):
        cos_theta = math.cos(angle)
        sin_theta = math.sin(angle)
        return [
            int(point[0] * cos_theta + point[2] * sin_theta),
            int(point[1]),
            int(-point[0] * sin_theta + point[2] * cos_theta)
        ]

    def rotate_z(point, angle):
        cos_theta = math.cos(angle)
        sin_theta = math.sin(angle)
        return [
            int(point[0] * cos_theta - point[1] * sin_theta),
            int(point[0] * sin_theta + point[1] * cos_theta),
            int(point[2])
        ]

    rotated_points = [
        rotate_z(rotate_y(rotate_x(point, angleX), angleY), angleZ)
        for point in translated_points
    ]

    return [point + center for point in rotated_points]


def kps3d_to_kps2d(kps):
  if len(kps['array'][0]) == 3:
    kps2d = {
      'width': kps['width'],
      'height': kps['height'],
      'opacities': kps['opacities'][:],
      'array': []
    }

    for x, y, _ in kps['array']:
       kps2d['array'].append([x, y])

    return kps2d
  return kps
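A standalone mirror of the projection (the name `to_2d` is illustrative):

```python
# Standalone mirror of kps3d_to_kps2d: drop the z coordinate when the
# keypoint array is 3D; 2D input passes through unchanged.
def to_2d(kps):
    if len(kps["array"][0]) == 3:
        return {"width": kps["width"], "height": kps["height"],
                "opacities": kps["opacities"][:],
                "array": [[x, y] for x, y, _ in kps["array"]]}
    return kps

kps3d = {"width": 64, "height": 64, "opacities": [1.0],
         "array": [[10, 20, 5]]}
print(to_2d(kps3d)["array"])  # [[10, 20]]
```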


def get_bbox_from_kps(kps_data, grow_by):
  kps = np.array(kps_data['array'])
  minX, minY = np.min(kps, axis=0)
  maxX, maxY = np.max(kps, axis=0)
  width = (maxX - minX) / grow_by
  height = (maxY - minY) / grow_by

  return [
    [
      int(max(np.ceil(minX - (width)), 0)),
      int(max(np.ceil(minY - (height)), 0))
    ],
    [
      int(min(np.ceil(maxX + (width)), kps_data['width'])),
      int(min(np.ceil(maxY + (height)), kps_data['height']))
    ]   
  ]
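The grow-and-clamp behaviour can be verified without numpy; `kps_bbox` below is an illustrative standalone mirror:

```python
import math

# Standalone mirror of get_bbox_from_kps: the keypoint bounding box grown on
# every side by (extent / grow_by) and clamped to the canvas dimensions.
def kps_bbox(kps_data, grow_by):
    xs = [p[0] for p in kps_data["array"]]
    ys = [p[1] for p in kps_data["array"]]
    grow_x = (max(xs) - min(xs)) / grow_by
    grow_y = (max(ys) - min(ys)) / grow_by
    return [
        [int(max(math.ceil(min(xs) - grow_x), 0)),
         int(max(math.ceil(min(ys) - grow_y), 0))],
        [int(min(math.ceil(max(xs) + grow_x), kps_data["width"])),
         int(min(math.ceil(max(ys) + grow_y), kps_data["height"]))],
    ]

data = {"width": 100, "height": 100, "array": [[10, 20], [30, 60]]}
print(kps_bbox(data, 2))  # [[0, 0], [40, 80]]
```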

================================================
FILE: workflows/auto_rotate.json
================================================
{
  "last_node_id": 682,
  "last_link_id": 1593,
  "nodes": [
    {
      "id": 268,
      "type": "PreviewImage",
      "pos": [
        1260,
        -340
      ],
      "size": {
        "0": 210,
        "1": 246
      },
      "flags": {},
      "order": 35,
      "mode": 0,
      "inputs": [
        {
          "name": "images",
          "type": "IMAGE",
          "link": 926
        }
      ],
      "title": "Image for inpaint",
      "properties": {
        "Node name for S&R": "PreviewImage"
      },
      "color": "#233",
      "bgcolor": "#355"
    },
    {
      "id": 412,
      "type": "Reroute",
      "pos": [
        1201,
        144
      ],
      "size": [
        75,
        26
      ],
      "flags": {},
      "order": 32,
      "mode": 0,
      "inputs": [
        {
          "name": "",
          "type": "*",
          "link": 1475,
          "widget": {
            "name": "value"
          }
        }
      ],
      "outputs": [
        {
          "name": "",
          "type": "INT",
          "links": [
            840
          ],
          "slot_index": 0
        }
      ],
      "properties": {
        "showOutputText": false,
        "horizontal": false
      }
    },
    {
      "id": 413,
      "type": "Reroute",
      "pos": [
        1201,
        164
      ],
      "size": [
        75,
        26
      ],
      "flags": {},
      "order": 33,
      "mode": 0,
      "inputs": [
        {
          "name": "",
          "type": "*",
          "link": 1476,
          "widget": {
            "name": "value"
          }
        }
      ],
      "outputs": [
        {
          "name": "",
          "type": "INT",
          "links": [
            841
          ],
          "slot_index": 0
        }
      ],
      "properties": {
        "showOutputText": false,
        "horizontal": false
      }
    },
    {
      "id": 395,
      "type": "PreviewImage",
      "pos": [
        1440,
        430
      ],
      "size": {
        "0": 612.2093505859375,
        "1": 842.1597900390625
      },
      "flags": {},
      "order": 50,
      "mode": 0,
      "inputs": [
        {
          "name": "images",
          "type": "IMAGE",
          "link": 1535
        }
      ],
      "title": "Output Image",
      "properties": {
        "Node name for S&R": "PreviewImage"
      },
      "color": "#233",
      "bgcolor": "#355"
    },
    {
      "id": 630,
      "type": "SetLatentNoiseMask",
      "pos": [
        894,
        510
      ],
      "size": {
        "0": 210,
        "1": 46
      },
      "flags": {
        "collapsed": true
      },
      "order": 43,
      "mode": 0,
      "inputs": [
        {
          "name": "samples",
          "type": "LATENT",
          "link": 1419
        },
        {
          "name": "mask",
          "type": "MASK",
          "link": 1421
        }
      ],
      "outputs": [
        {
          "name": "LATENT",
          "type": "LATENT",
          "links": [
            1420
          ],
          "slot_index": 0,
          "shape": 3
        }
      ],
      "properties": {
        "Node name for S&R": "SetLatentNoiseMask"
      }
    },
    {
      "id": 354,
      "type": "VAEEncode",
      "pos": [
        751,
        510
      ],
      "size": {
        "0": 309.7555847167969,
        "1": 46
      },
      "flags": {
        "collapsed": true
      },
      "order": 34,
      "mode": 0,
      "inputs": [
        {
          "name": "pixels",
          "type": "IMAGE",
          "link": 923
        },
        {
          "name": "vae",
          "type": "VAE",
          "link": 1024
        }
      ],
      "outputs": [
        {
          "name": "LATENT",
          "type": "LATENT",
          "links": [
            1419
          ],
          "slot_index": 0,
          "shape": 3
        }
      ],
      "properties": {
        "Node name for S&R": "VAEEncode"
      }
    },
    {
      "id": 474,
      "type": "Reroute",
      "pos": [
        416,
        478
      ],
      "size": [
        75,
        26
      ],
      "flags": {},
      "order": 11,
      "mode": 0,
      "inputs": [
        {
          "name": "",
          "type": "*",
          "link": 1026
        }
      ],
      "outputs": [
        {
          "name": "",
          "type": "VAE",
          "links": [
            1024,
            1027,
            1593
          ],
          "slot_index": 0
        }
      ],
      "properties": {
        "showOutputText": false,
        "horizontal": false
      }
    },
    {
      "id": 248,
      "type": "VAEDecode",
      "pos": [
        1438,
        323
      ],
      "size": {
        "0": 140,
        "1": 46
      },
      "flags": {
        "collapsed": true
      },
      "order": 45,
      "mode": 0,
      "inputs": [
        {
          "name": "samples",
          "type": "LATENT",
          "link": 1340
        },
        {
          "name": "vae",
          "type": "VAE",
          "link": 768
        }
      ],
      "outputs": [
        {
          "name": "IMAGE",
          "type": "IMAGE",
          "links": [
            797
          ],
          "slot_index": 0,
          "shape": 3
        }
      ],
      "properties": {
        "Node name for S&R": "VAEDecode"
      }
    },
    {
      "id": 392,
      "type": "Reroute",
      "pos": [
        756,
        293
      ],
      "size": [
        75,
        26
      ],
      "flags": {},
      "order": 19,
      "mode": 0,
      "inputs": [
        {
          "name": "",
          "type": "*",
          "link": 1027
        }
      ],
      "outputs": [
        {
          "name": "",
          "type": "VAE",
          "links": [
            768
          ],
          "slot_index": 0
        }
      ],
      "properties": {
        "showOutputText": false,
        "horizontal": false
      }
    },
    {
      "id": 262,
      "type": "MaskToImage",
      "pos": [
        1240,
        20
      ],
      "size": {
        "0": 210,
        "1": 26
      },
      "flags": {
        "collapsed": true
      },
      "order": 36,
      "mode": 0,
      "inputs": [
        {
          "name": "mask",
          "type": "MASK",
          "link": 828,
          "slot_index": 0
        }
      ],
      "outputs": [
        {
          "name": "IMAGE",
          "type": "IMAGE",
          "links": [
            408,
            558
          ],
          "slot_index": 0,
          "shape": 3
        }
      ],
      "properties": {
        "Node name for S&R": "MaskToImage"
      }
    },
    {
      "id": 323,
      "type": "ImageToMask",
      "pos": [
        1790,
        20
      ],
      "size": {
        "0": 315,
        "1": 58
      },
      "flags": {
        "collapsed": true
      },
      "order": 41,
      "mode": 0,
      "inputs": [
        {
          "name": "image",
          "type": "IMAGE",
          "link": 563
        }
      ],
      "outputs": [
        {
          "name": "MASK",
          "type": "MASK",
          "links": [
            1421
          ],
          "slot_index": 0,
          "shape": 3
        }
      ],
      "properties": {
        "Node name for S&R": "ImageToMask"
      },
      "widgets_values": [
        "red"
      ]
    },
    {
      "id": 264,
      "type": "PreviewImage",
      "pos": [
        1500,
        -340
      ],
      "size": {
        "0": 210,
        "1": 246
      },
      "flags": {},
      "order": 39,
      "mode": 0,
      "inputs": [
        {
          "name": "images",
          "type": "IMAGE",
          "link": 408
        }
      ],
      "title": "Mask",
      "properties": {
        "Node name for S&R": "PreviewImage"
      },
      "color": "#233",
      "bgcolor": "#355"
    },
    {
      "id": 410,
      "type": "Reroute",
      "pos": [
        1118,
        104
      ],
      "size": [
        75,
        26
      ],
      "flags": {},
      "order": 30,
      "mode": 0,
      "inputs": [
        {
          "name": "",
          "type": "*",
          "link": 1473,
          "widget": {
            "name": "value"
          }
        }
      ],
      "outputs": [
        {
          "name": "",
          "type": "INT",
          "links": [
            836
          ],
          "slot_index": 0
        }
      ],
      "properties": {
        "showOutputText": false,
        "horizontal": false
      }
    },
    {
      "id": 411,
      "type": "Reroute",
      "pos": [
        1118,
        124
      ],
      "size": [
        75,
        26
      ],
      "flags": {},
      "order": 31,
      "mode": 0,
      "inputs": [
        {
          "name": "",
          "type": "*",
          "link": 1474,
          "widget": {
            "name": "value"
          }
        }
      ],
      "outputs": [
        {
          "name": "",
          "type": "INT",
          "links": [
            838
          ]
        }
      ],
      "properties": {
        "showOutputText": false,
        "horizontal": false
      }
    },
    {
      "id": 369,
      "type": "PreviewImage",
      "pos": [
        1020,
        -340
      ],
      "size": {
        "0": 210,
        "1": 246
      },
      "flags": {},
      "order": 37,
      "mode": 0,
      "inputs": [
        {
          "name": "images",
          "type": "IMAGE",
          "link": 1052
        }
      ],
      "title": "InstantId Control Image",
      "properties": {
        "Node name for S&R": "PreviewImage"
      },
      "color": "#233",
      "bgcolor": "#355"
    },
    {
      "id": 326,
      "type": "PreviewImage",
      "pos": [
        1740,
        -340
      ],
      "size": {
        "0": 210,
        "1": 246
      },
      "flags": {},
      "order": 42,
      "mode": 0,
      "inputs": [
        {
          "name": "images",
          "type": "IMAGE",
          "link": 590
        }
      ],
      "title": "Blurred Mask",
      "properties": {
        "Node name for S&R": "PreviewImage"
      },
      "color": "#233",
      "bgcolor": "#355"
    },
    {
      "id": 408,
      "type": "Reroute",
      "pos": [
        826,
        -9
      ],
      "size": [
        75,
        26
      ],
      "flags": {},
      "order": 28,
      "mode": 0,
      "inputs": [
        {
          "name": "",
          "type": "*",
          "link": 1504
        }
      ],
      "outputs": [
        {
          "name": "",
          "type": "MASK",
          "links": [
            828
          ],
          "slot_index": 0
        }
      ],
      "properties": {
        "showOutputText": false,
        "horizontal": false
      }
    },
    {
      "id": 258,
      "type": "PreviewImage",
      "pos": [
        2100,
        433
      ],
      "size": {
        "0": 612.114013671875,
        "1": 845.9668579101562
      },
      "flags": {},
      "order": 13,
      "mode": 0,
      "inputs": [
        {
          "name": "images",
          "type": "IMAGE",
          "link": 1552
        }
      ],
      "title": "Input Image",
      "properties": {
        "Node name for S&R": "PreviewImage"
      },
      "color": "#233",
      "bgcolor": "#355"
    },
    {
      "id": 394,
      "type": "ImageScale",
      "pos": [
        1607,
        197
      ],
      "size": {
        "0": 315,
        "1": 130
      },
      "flags": {
        "collapsed": true
      },
      "order": 46,
      "mode": 0,
      "inputs": [
        {
          "name": "image",
          "type": "IMAGE",
          "link": 797
        },
        {
          "name": "width",
          "type": "INT",
          "link": 840,
          "widget": {
            "name": "width"
          }
        },
        {
          "name": "height",
          "type": "INT",
          "link": 841,
          "widget": {
            "name": "height"
          }
        }
      ],
      "outputs": [
        {
          "name": "IMAGE",
          "type": "IMAGE",
          "links": [
            781
          ],
          "slot_index": 0,
          "shape": 3
        }
      ],
      "properties": {
        "Node name for S&R": "ImageScale"
      },
      "widgets_values": [
        "bilinear",
        512,
        512,
        "disabled"
      ]
    },
    {
      "id": 667,
      "type": "ComposeRotated",
      "pos": [
        2285,
        73
      ],
      "size": {
        "0": 254.40000915527344,
        "1": 46
      },
      "flags": {
        "collapsed": true
      },
      "order": 49,
      "mode": 0,
      "inputs": [
        {
          "name": "original_image",
          "type": "IMAGE",
          "link": 1533
        },
        {
          "name": "rotated_image",
          "type": "IMAGE",
          "link": 1538
        }
      ],
      "outputs": [
        {
          "name": "image",
          "type": "IMAGE",
          "links": [
            1535
          ],
          "slot_index": 0,
          "shape": 3
        }
      ],
      "properties": {
        "Node name for S&R": "ComposeRotated"
      },
      "color": "#432",
      "bgcolor": "#653"
    },
    {
      "id": 664,
      "type": "MaskToImage",
      "pos": [
        -298,
        89
      ],
      "size": {
        "0": 210,
        "1": 26
      },
      "flags": {
        "collapsed": true
      },
      "order": 8,
      "mode": 0,
      "inputs": [
        {
          "name": "mask",
          "type": "MASK",
          "link": 1530
        }
      ],
      "outputs": [
        {
          "name": "IMAGE",
          "type": "IMAGE",
          "links": [
            1563
          ],
          "slot_index": 0,
          "shape": 3
        }
      ],
      "properties": {
        "Node name for S&R": "MaskToImage"
      }
    },
    {
      "id": 579,
      "type": "Reroute",
      "pos": [
        1973,
        43
      ],
      "size": [
        75,
        26
      ],
      "flags": {},
      "order": 22,
      "mode": 0,
      "inputs": [
        {
          "name": "",
          "type": "*",
          "link": 1544
        }
      ],
      "outputs": [
        {
          "name": "",
          "type": "IMAGE",
          "links": [],
          "slot_index": 0
        }
      ],
      "properties": {
        "showOutputText": false,
        "horizontal": false
      }
    },
    {
      "id": 569,
      "type": "Reroute",
      "pos": [
        206,
        313
      ],
      "size": [
        75,
        26
      ],
      "flags": {},
      "order": 23,
      "mode": 0,
      "inputs": [
        {
          "name": "",
          "type": "*",
          "link": 1546
        }
      ],
      "outputs": [
        {
          "name": "",
          "type": "IMAGE",
          "links": [
            1314
          ],
          "slot_index": 0
        }
      ],
      "properties": {
        "showOutputText": false,
        "horizontal": false
      }
    },
    {
      "id": 389,
      "type": "Reroute",
      "pos": [
        1708,
        311
      ],
      "size": [
        75,
        26
      ],
      "flags": {},
      "order": 25,
      "mode": 0,
      "inputs": [
        {
          "name": "",
          "type": "*",
          "link": 1314
        }
      ],
      "outputs": [
        {
          "name": "",
          "type": "IMAGE",
          "links": [
            1032
          ],
          "slot_index": 0
        }
      ],
      "properties": {
        "showOutputText": false,
        "horizontal": false
      }
    },
    {
      "id": 396,
      "type": "ImageCompositeMasked",
      "pos": [
        1838,
        149
      ],
      "size": {
        "0": 327.45550537109375,
        "1": 140.86239624023438
      },
      "flags": {
        "collapsed": true
      },
      "order": 47,
      "mode": 0,
      "inputs": [
        {
          "name": "destination",
          "type": "IMAGE",
          "link": 1032
        },
        {
          "name": "source",
          "type": "IMAGE",
          "link": 781
        },
        {
          "name": "mask",
          "type": "MASK",
          "link": null
        },
        {
          "name": "x",
          "type": "INT",
          "link": 836,
          "widget": {
            "name": "x"
          }
        },
        {
          "name": "y",
          "type": "INT",
          "link": 838,
          "widget": {
            "name": "y"
          }
        }
      ],
      "outputs": [
        {
          "name": "IMAGE",
          "type": "IMAGE",
          "links": [
            1536
          ],
          "slot_index": 0,
          "shape": 3
        }
      ],
      "properties": {
        "Node name for S&R": "ImageCompositeMasked"
      },
      "widgets_values": [
        0,
        0,
        false
      ]
    },
    {
      "id": 672,
      "type": "Reroute",
      "pos": [
        207,
        402
      ],
      "size": [
        75,
        26
      ],
      "flags": {},
      "order": 7,
      "mode": 0,
      "inputs": [
        {
          "name": "",
          "type": "*",
          "link": 1551
        }
      ],
      "outputs": [
        {
          "name": "",
          "type": "IMAGE",
          "links": [
            1552
          ],
          "slot_index": 0
        }
      ],
      "properties": {
        "showOutputText": false,
        "horizontal": false
      }
    },
    {
      "id": 650,
      "type": "LoadInsightface",
      "pos": [
        -870,
        -130
      ],
      "size": {
        "0": 210,
        "1": 26
      },
      "flags": {},
      "order": 0,
      "mode": 0,
      "outputs": [
        {
          "name": "insightface",
          "type": "INSIGHTFACE_APP",
          "links": [
            1491,
            1555,
            1564
          ],
          "slot_index": 0,
          "shape": 3
        }
      ],
      "properties": {
        "Node name for S&R": "LoadInsightface"
      },
      "color": "#432",
      "bgcolor": "#653"
    },
    {
      "id": 655,
      "type": "Reroute",
      "pos": [
        1960,
        -290
      ],
      "size": [
        75,
        26
      ],
      "flags": {},
      "order": 16,
      "mode": 0,
      "inputs": [
        {
          "name": "",
          "type": "*",
          "link": 1565,
          "widget": {
            "name": "value"
          }
        }
      ],
      "outputs": [
        {
          "name": "",
          "type": "FLOAT",
          "links": [
            1566
          ],
          "slot_index": 0
        }
      ],
      "properties": {
        "showOutputText": false,
        "horizontal": false
      }
    },
    {
      "id": 407,
      "type": "Reroute",
      "pos": [
        1118,
        44
      ],
      "size": [
        75,
        26
      ],
      "flags": {},
      "order": 27,
      "mode": 0,
      "inputs": [
        {
          "name": "",
          "type": "*",
          "link": 1567
        }
      ],
      "outputs": [
        {
          "name": "",
          "type": "IMAGE",
          "links": [
            923,
            926
          ],
          "slot_index": 0
        }
      ],
      "properties": {
        "showOutputText": false,
        "horizontal": false
      }
    },
    {
      "id": 669,
      "type": "RotateImage",
      "pos": [
        -76,
        88
      ],
      "size": {
        "0": 315,
        "1": 82
      },
      "flags": {
        "collapsed": true
      },
      "order": 15,
      "mode": 0,
      "inputs": [
        {
          "name": "image",
          "type": "IMAGE",
          "link": 1563
        },
        {
          "name": "angle",
          "type": "FLOAT",
          "link": 1559,
          "widget": {
            "name": "angle"
          }
        }
      ],
      "outputs": [
        {
          "name": "rotated_image",
          "type": "IMAGE",
          "links": [
            1573
          ],
          "slot_index": 0,
          "shape": 3
        }
      ],
      "properties": {
        "Node name for S&R": "RotateImage"
      },
      "widgets_values": [
        30,
        true
      ],
      "color": "#432",
      "bgcolor": "#653"
    },
    {
      "id": 670,
      "type": "RotateImage",
      "pos": [
        -76,
        67
      ],
      "size": {
        "0": 315,
        "1": 82
      },
      "flags": {
        "collapsed": true
      },
      "order": 14,
      "mode": 0,
      "inputs": [
        {
          "name": "image",
          "type": "IMAGE",
          "link": 1542
        },
        {
          "name": "angle",
          "type": "FLOAT",
          "link": 1558,
          "widget": {
            "name": "angle"
          }
        }
      ],
      "outputs": [
        {
          "name": "rotated_image",
          "type": "IMAGE",
          "links": [
            1544,
            1545,
            1546
          ],
          "slot_index": 0,
          "shape": 3
        }
      ],
      "properties": {
        "Node name for S&R": "RotateImage"
      },
      "widgets_values": [
        30,
        true
      ],
      "color": "#432",
      "bgcolor": "#653"
    },
    {
      "id": 665,
      "type": "ImageToMask",
      "pos": [
        93,
        88
      ],
      "size": {
        "0": 315,
        "1": 58
      },
      "flags": {
        "collapsed": true
      },
      "order": 24,
      "mode": 0,
      "inputs": [
        {
          "name": "image",
          "type": "IMAGE",
          "link": 1573
        }
      ],
      "outputs": [
        {
          "name": "MASK",
          "type": "MASK",
          "links": [
            1572
          ],
          "slot_index": 0,
          "shape": 3
        }
      ],
      "properties": {
        "Node name for S&R": "ImageToMask"
      },
      "widgets_values": [
        "red"
      ]
    },
    {
      "id": 668,
      "type": "RotateImage",
      "pos": [
        2100,
        150
      ],
      "size": {
        "0": 315,
        "1": 82
      },
      "flags": {
        "collapsed": true
      },
      "order": 48,
      "mode": 0,
      "inputs": [
        {
          "name": "image",
          "type": "IMAGE",
          "link": 1536
        },
        {
          "name": "angle",
          "type": "FLOAT",
          "link": 1566,
          "widget": {
            "name": "angle"
          }
        }
      ],
      "outputs": [
        {
          "name": "rotated_image",
          "type": "IMAGE",
          "links": [
            1538
          ],
          "shape": 3
        }
      ],
      "properties": {
        "Node name for S&R": "RotateImage"
      },
      "widgets_values": [
        30,
        false
      ],
      "color": "#432",
      "bgcolor": "#653"
    },
    {
      "id": 647,
      "type": "PreprocessImage",
      "pos": [
        320,
        40
      ],
      "size": {
        "0": 346.7717590332031,
        "1": 290
      },
      "flags": {},
      "order": 26,
      "mode": 0,
      "inputs": [
        {
          "name": "image",
          "type": "IMAGE",
          "link": 1545
        },
        {
          "name": "mask",
          "type": "MASK",
          "link": 1572
        },
        {
          "name": "insightface",
          "type": "INSIGHTFACE_APP",
          "link": 1564
        }
      ],
      "outputs": [
        {
          "name": "resized_image",
          "type": "IMAGE",
          "links": [
            1567
          ],
          "shape": 3
        },
        {
          "name": "mask",
          "type": "MASK",
          "links": [
            1504
          ],
          "shape": 3
        },
        {
          "name": "control_image",
          "type": "IMAGE",
          "links": [
            1562
          ],
          "slot_index": 2,
          "shape": 3
        },
        {
          "name": "x",
          "type": "INT",
          "links": [
            1473
          ],
          "slot_index": 3,
          "shape": 3
        },
        {
          "name": "y",
          "type": "INT",
          "links": [
            1474
          ],
          "slot_index": 4,
          "shape": 3
        },
        {
          "name": "original_width",
          "type": "INT",
          "links": [
            1475
          ],
          "slot_index": 5,
          "shape": 3
        },
        {
          "name": "original_height",
          "type": "INT",
          "links": [
            1476
          ],
          "slot_index": 6,
          "shape": 3
        },
        {
          "name": "new_width",
          "type": "INT",
          "links": [],
          "slot_index": 7,
          "shape": 3
        },
        {
          "name": "new_height",
          "type": "INT",
          "links": [],
          "slot_index": 8,
          "shape": 3
        }
      ],
      "properties": {
        "Node name for S&R": "PreprocessImage"
      },
      "widgets_values": [
        1024,
        1024,
        "auto",
        150
      ],
      "color": "#432",
      "bgcolor": "#653"
    },
    {
      "id": 642,
      "type": "FaceEmbed",
      "pos": [
        -580,
        -103
      ],
      "size": {
        "0": 292.20001220703125,
        "1": 66
      },
      "flags": {
        "collapsed": true
      },
      "order": 6,
      "mode": 0,
      "inputs": [
        {
          "name": "insightface",
          "type": "INSIGHTFACE_APP",
          "link": 1491
        },
        {
          "name": "face_image",
          "type": "IMAGE",
          "link": 1460
        },
        {
          "name": "face_embeds",
          "type": "FACE_EMBED",
          "link": null
        }
      ],
      "outputs": [
        {
          "name": "face embeds",
          "type": "FACE_EMBED",
          "links": [
            1576
          ],
          "slot_index": 0,
          "shape": 3
        }
      ],
      "properties": {
        "Node name for S&R": "FaceEmbed"
      },
      "color": "#432",
      "bgcolor": "#653"
    },
    {
      "id": 636,
      "type": "ControlNetLoader",
      "pos": [
        -650,
        950
      ],
      "size": {
        "0": 315,
        "1": 58
      },
      "flags": {},
      "order": 1,
      "mode": 0,
      "outputs": [
        {
          "name": "CONTROL_NET",
          "type": "CONTROL_NET",
          "links": [
            1582
          ],
          "slot_index": 0,
          "shape": 3
        }
      ],
      "properties": {
        "Node name for S&R": "ControlNetLoader"
      },
      "widgets_values": [
        "ControlNetModel\\diffusion_pytorch_model.safetensors"
      ],
      "color": "#223",
      "bgcolor": "#335"
    },
    {
      "id": 481,
      "type": "Reroute",
      "pos": [
        823,
        81
      ],
      "size": [
        75,
        26
      ],
      "flags": {},
      "order": 29,
      "mode": 0,
      "inputs": [
        {
          "name": "",
          "type": "*",
          "link": 1562
        }
      ],
      "outputs": [
        {
          "name": "",
          "type": "IMAGE",
          "links": [
            1052,
            1583
          ],
          "slot_index": 0
        }
      ],
      "properties": {
        "showOutputText": false,
        "horizontal": false
      }
    },
    {
      "id": 679,
      "type": "FaceEmbedCombine",
      "pos": [
        -36,
        549
      ],
      "size": {
        "0": 367.79998779296875,
        "1": 46
      },
      "flags": {
        "collapsed": true
      },
      "order": 12,
      "mode": 0,
      "inputs": [
        {
          "name": "resampler",
          "type": "RESAMPLER",
          "link": 1577
        },
        {
          "name": "face_embeds",
          "type": "FACE_EMBED",
          "link": 1576
        }
      ],
      "outputs": [
        {
          "name": "face conditioning",
          "type": "FACE_CONDITIONING",
          "links": [
            1587,
            1590
          ],
          "slot_index": 0,
          "shape": 3
        }
      ],
      "properties": {
        "Node name for S&R": "FaceEmbedCombine"
      },
      "color": "#432",
      "bgcolor": "#653"
    },
    {
      "id": 677,
      "type": "LoadInstantIdAdapter",
      "pos": [
        -642,
        801
      ],
      "size": {
        "0": 315,
        "1": 78
      },
      "flags": {},
      "order": 2,
      "mode": 0,
      "outputs": [
        {
          "name": "InstantId_adapter",
          "type": "INSTANTID_ADAPTER",
          "links": [
            1589
          ],
          "slot_index": 0,
          "shape": 3
        },
        {
          "name": "resampler",
          "type": "RESAMPLER",
          "links": [
            1577
          ],
          "slot_index": 1,
          "shape": 3
        }
      ],
      "properties": {
        "Node name for S&R": "LoadInstantIdAdapter"
      },
      "widgets_values": [
        "ip-adapter.bin"
      ],
      "color": "#432",
      "bgcolor": "#653"
    },
    {
      "id": 252,
      "type": "CLIPTextEncode",
      "pos": [
        330,
        785
      ],
      "size": {
        "0": 321.2493896484375,
        "1": 112.86385345458984
      },
      "flags": {},
      "order": 17,
      "mode": 0,
      "inputs": [
        {
          "name": "clip",
          "type": "CLIP",
          "link": 1324
        }
      ],
      "outputs": [
        {
          "name": "CONDITIONING",
          "type": "CONDITIONING",
          "links": [
            1580
          ],
          "slot_index": 0,
          "shape": 3
        }
      ],
      "title": "Positive Prompt",
      "properties": {
        "Node name for S&R": "CLIPTextEncode"
      },
      "widgets_values": [
        ""
      ],
      "color": "#232",
      "bgcolor": "#353"
    },
    {
      "id": 287,
      "type": "CLIPTextEncode",
      "pos": [
        328,
        930
      ],
      "size": {
        "0": 323.7601013183594,
        "1": 111.55984497070312
      },
      "flags": {},
      "order": 18,
      "mode": 0,
      "inputs": [
        {
          "name": "clip",
          "type": "CLIP",
          "link": 1592
        }
      ],
      "outputs": [
        {
          "name": "CONDITIONING",
          "type": "CONDITIONING",
          "links": [
            1581
          ],
          "slot_index": 0,
          "shape": 3
        }
      ],
      "title": "Negative Prompt",
      "properties": {
        "Node name for S&R": "CLIPTextEncode"
      },
      "widgets_values": [
        ""
      ],
      "color": "#322",
      "bgcolor": "#533"
    },
    {
      "id": 680,
      "type": "ControlNetInstantIdApply",
      "pos": [
        737,
        590
      ],
      "size": {
        "0": 330,
        "1": 138
      },
      "flags": {},
      "order": 38,
      "mode": 0,
      "inputs": [
        {
          "name": "positive",
          "type": "CONDITIONING",
          "link": 1580
        },
        {
          "name": "negative",
          "type": "CONDITIONING",
          "link": 1581
        },
        {
          "name": "face_conditioning",
          "type": "FACE_CONDITIONING",
          "link": 1587
        },
        {
          "name": "control_net",
          "type": "CONTROL_NET",
          "link": 1582
        },
        {
          "name": "image",
          "type": "IMAGE",
          "link": 1583
        }
      ],
      "outputs": [
        {
          "name": "positive",
          "type": "CONDITIONING",
          "links": [
            1584
          ],
          "slot_index": 0,
          "shape": 3
        },
        {
          "name": "negative",
          "type": "CONDITIONING",
          "links": [
            1585
          ],
          "slot_index": 1,
          "shape": 3
        }
      ],
      "properties": {
        "Node name for S&R": "ControlNetInstantIdApply"
      },
      "widgets_values": [
        0.7000000000000001
      ],
      "color": "#432",
      "bgcolor": "#653"
    },
    {
      "id": 681,
      "type": "InstantIdAdapterApply",
      "pos": [
        337,
        574
      ],
      "size": {
        "0": 315,
        "1": 98
      },
      "flags": {},
      "order": 21,
      "mode": 0,
      "inputs": [
        {
          "name": "model",
          "type": "MODEL",
          "link": 1588
        },
        {
          "name": "instantId_adapter",
          "type": "INSTANTID_ADAPTER",
          "link": 1589
        },
        {
          "name": "face_conditioning",
          "type": "FACE_CONDITIONING",
          "link": 1590
        }
      ],
      "outputs": [
        {
          "name": "model",
          "type": "MODEL",
          "links": [
            1591
          ],
          "shape": 3
        }
      ],
      "properties": {
        "Node name for S&R": "InstantIdAdapterApply"
      },
      "widgets_values": [
        0.8
      ],
      "color": "#432",
      "bgcolor": "#653"
    },
    {
      "id": 359,
      "type": "KSampler",
      "pos": [
        1097,
        420
      ],
      "size": {
        "0": 316.94384765625,
        "1": 486.80694580078125
      },
      "flags": {},
      "order": 44,
      "mode": 0,
      "inputs": [
        {
          "name": "model",
          "type": "MODEL",
          "link": 1591
        },
        {
          "name": "positive",
          "type": "CONDITIONING",
          "link": 1584
        },
        {
          "name": "negative",
          "type": "CONDITIONING",
          "link": 1585
        },
        {
          "name": "latent_image",
          "type": "LATENT",
          "link": 1420
        }
      ],
      "outputs": [
        {
          "name": "LATENT",
          "type": "LATENT",
          "links": [
            1340
          ],
          "slot_index": 0,
          "shape": 3
        }
      ],
      "properties": {
        "Node name for S&R": "KSampler"
      },
      "widgets_values": [
        930143501615199,
        "randomize",
        30,
        3,
        "euler",
        "karras",
        0.7000000000000001
      ],
      "color": "#323",
      "bgcolor": "#535"
    },
    {
      "id": 471,
      "type": "Reroute",
      "pos": [
        229,
        784
      ],
      "size": [
        75,
        26
      ],
      "flags": {},
      "order": 10,
      "mode": 0,
      "inputs": [
        {
          "name": "",
          "type": "*",
          "link": 1015
        }
      ],
      "outputs": [
        {
          "name": "",
          "type": "CLIP",
          "links": [
            1324,
            1592
          ],
          "slot_index": 0
        }
      ],
      "properties": {
        "showOutputText": false,
        "horizontal": false
      }
    },
    {
      "id": 641,
      "type": "LoadImage",
      "pos": [
        -600,
        -460
      ],
      "size": {
        "0": 235.36587524414062,
        "1": 314
      },
      "flags": {},
      "order": 3,
      "mode": 0,
      "outputs": [
        {
          "name": "IMAGE",
          "type": "IMAGE",
          "links": [
            1460
          ],
          "slot_index": 0,
          "shape": 3
        },
        {
          "name": "MASK",
          "type": "MASK",
          "links": null,
          "shape": 3
        }
      ],
      "title": "Load face Referecnce",
      "properties": {
        "Node name for S&R": "LoadImage"
      },
      "widgets_values": [
        "z19900074AMP,Luke-Skywalker-i-Yoda---Gwiezdne-wojny--Imperium-k.jpg",
        "image"
      ],
      "color": "#223",
      "bgcolor": "#335"
    },
    {
      "id": 253,
      "type": "LoadImage",
      "pos": [
        -620,
        40
      ],
      "size": {
        "0": 290.3117370605469,
        "1": 314
      },
      "flags": {},
      "order": 4,
      "mode": 0,
      "outputs": [
        {
          "name": "IMAGE",
          "type": "IMAGE",
          "links": [
            1533,
            1542,
            1551,
            1556
          ],
          "slot_index": 0,
          "shape": 3
        },
        {
          "name": "MASK",
          "type": "MASK",
          "links": [
            1530,
            1557
          ],
          "slot_index": 1,
          "shape": 3
        }
      ],
      "title": "Load Pose Image",
      "properties": {
        "Node name for S&R": "LoadImage"
      },
      "widgets_values": [
        "HanSolo.webp",
        "image"
      ],
      "color": "#223",
      "bgcolor": "#335"
    },
    {
      "id": 241,
      "type": "CheckpointLoaderSimple",
      "pos": [
        -630,
        430
      ],
      "size": {
        "0": 295.705078125,
        "1": 310
      },
      "flags": {},
      "order": 5,
      "mode": 0,
      "outputs": [
        {
          "name": "MODEL",
          "type": "MODEL",
          "links": [
            1588
          ],
          "slot_index": 0,
          "shape": 3
        },
        {
          "name": "CLIP",
          "type": "CLIP",
          "links": [
            1015
          ],
          "slot_index": 1,
          "shape": 3
        },
        {
          "name": "VAE",
          "type": "VAE",
          "links": [
            1026
          ],
          "slot_index": 2,
          "shape": 3
        }
      ],
      "properties": {
        "Node name for S&R": "CheckpointLoaderSimple"
      },
      "widgets_values": [
        "custom_3.9.safetensors"
      ],
      "color": "#223",
      "bgcolor": "#335"
    },
    {
      "id": 319,
      "type": "ImageBlur",
      "pos": [
        1450,
        -10
      ],
      "size": {
        "0": 315,
        "1": 82
      },
      "flags": {
        "collapsed": false
      },
      "order": 40,
      "mode": 0,
      "inputs": [
        {
          "name": "image",
          "type": "IMAGE",
          "link": 558
        }
      ],
      "outputs": [
        {
          "name": "IMAGE",
          "type": "IMAGE",
          "links": [
            563,
            590
          ],
          "slot_index": 0,
          "shape": 3
        }
      ],
      "properties": {
        "Node name for S&R": "ImageBlur"
      },
      "widgets_values": [
        1,
        0.1
      ],
      "color": "#323",
      "bgcolor": "#535"
    },
    {
      "id": 673,
      "type": "AngleFromFace",
      "pos": [
        -303,
        -298
      ],
      "size": {
        "0": 315,
        "1": 194
      },
      "flags": {
        "collapsed": false
      },
      "order": 9,
      "mode": 0,
      "inputs": [
        {
          "name": "insightface",
          "type": "INSIGHTFACE_APP",
          "link": 1555
        },
        {
          "name": "image",
          "type": "IMAGE",
          "link": 1556
        },
        {
          "name": "mask",
          "type": "MASK",
          "link": 1557
        }
      ],
      "outputs": [
        {
          "name": "angle",
          "type": "FLOAT",
          "links": [
            1558,
            1559,
            1565
          ],
          "slot_index": 0,
          "shape": 3
        }
      ],
      "properties": {
        "Node name for S&R": "AngleFromFace"
      },
      "widgets_values": [
        "loseless",
        200,
        200,
        200,
        200
      ],
      "color": "#432",
      "bgcolor": "#653"
    },
    {
      "id": 682,
      "type": "Reroute",
      "pos": [
        522,
        475.47650853223274
      ],
      "size": [
        75,
        26
      ],
      "flags": {},
      "order": 20,
      "mode": 0,
      "inputs": [
        {
          "name": "",
          "type": "*",
          "link": 1593
        }
      ],
      "outputs": [
        {
          "name": "",
          "type": "VAE",
          "links": null
        }
      ],
      "properties": {
        "showOutputText": false,
        "horizontal": false
      }
    }
  ],
  "links": [
    [
      408,
      262,
      0,
      264,
      0,
      "IMAGE"
    ],
    [
      558,
      262,
      0,
      319,
      0,
      "IMAGE"
    ],
    [
      563,
      319,
      0,
      323,
      0,
      "IMAGE"
    ],
    [
      590,
      319,
      0,
      326,
      0,
      "IMAGE"
    ],
    [
      768,
      392,
      0,
      248,
      1,
      "VAE"
    ],
    [
      781,
      394,
      0,
      396,
      1,
      "IMAGE"
    ],
    [
      797,
      248,
      0,
      394,
      0,
      "IMAGE"
    ],
    [
      828,
      408,
      0,
      262,
      0,
      "MASK"
    ],
    [
      836,
      410,
      0,
      396,
      3,
      "INT"
    ],
    [
      838,
      411,
      0,
      396,
      4,
      "INT"
    ],
    [
      840,
      412,
      0,
      394,
      1,
      "INT"
    ],
    [
      841,
      413,
      0,
      394,
      2,
      "INT"
    ],
    [
      923,
      407,
      0,
      354,
      0,
      "IMAGE"
    ],
    [
      926,
      407,
      0,
      268,
      0,
      "IMAGE"
    ],
    [
      1015,
      241,
      1,
      471,
      0,
      "*"
    ],
    [
      1024,
      474,
      0,
      354,
      1,
      "VAE"
    ],
    [
      1026,
      241,
      2,
      474,
      0,
      "*"
    ],
    [
      1027,
      474,
      0,
      392,
      0,
      "*"
    ],
    [
      1032,
      389,
      0,
      396,
      0,
      "IMAGE"
    ],
    [
      1052,
      481,
      0,
      369,
      0,
      "IMAGE"
    ],
    [
      1314,
      569,
      0,
      389,
      0,
      "*"
    ],
    [
      1324,
      471,
      0,
      252,
      0,
      "CLIP"
    ],
    [
      1340,
      359,
      0,
      248,
      0,
      "LATENT"
    ],
    [
      1419,
      354,
      0,
      630,
      0,
      "LATENT"
    ],
    [
      1420,
      630,
      0,
      359,
      3,
      "LATENT"
    ],
    [
      1421,
      323,
      0,
      630,
      1,
      "MASK"
    ],
    [
      1460,
      641,
      0,
      642,
      1,
      "IMAGE"
    ],
    [
      1473,
      647,
      3,
      410,
      0,
      "*"
    ],
    [
      1474,
      647,
      4,
      411,
      0,
      "*"
    ],
    [
      1475,
      647,
      5,
      412,
      0,
      "*"
    ],
    [
      1476,
      647,
      6,
      413,
      0,
      "*"
    ],
    [
      1491,
      650,
      0,
      642,
      0,
      "INSIGHTFACE_APP"
    ],
    [
      1504,
      647,
      1,
      408,
      0,
      "*"
    ],
    [
      1530,
      253,
      1,
      664,
      0,
      "MASK"
    ],
    [
      1533,
      253,
      0,
      667,
      0,
      "IMAGE"
    ],
    [
      1535,
      667,
      0,
      395,
      0,
      "IMAGE"
    ],
    [
      1536,
      396,
      0,
      668,
      0,
      "IMAGE"
    ],
    [
      1538,
      668,
      0,
      667,
      1,
      "IMAGE"
    ],
    [
      1542,
      253,
      0,
      670,
      0,
      "IMAGE"
    ],
    [
      1544,
      670,
      0,
      579,
      0,
      "*"
    ],
    [
      1545,
      670,
      0,
      647,
      0,
      "IMAGE"
    ],
    [
      1546,
      670,
      0,
      569,
      0,
      "*"
    ],
    [
      1551,
      253,
      0,
      672,
      0,
      "*"
    ],
    [
      1552,
      672,
      0,
      258,
      0,
      "IMAGE"
    ],
    [
      1555,
      650,
      0,
      673,
      0,
      "INSIGHTFACE_APP"
    ],
    [
      1556,
      253,
      0,
      673,
      1,
      "IMAGE"
    ],
    [
      1557,
      253,
      1,
      673,
      2,
      "MASK"
    ],
    [
      1558,
      673,
      0,
      670,
      1,
      "FLOAT"
    ],
    [
      1559,
      673,
      0,
      669,
      1,
      "FLOAT"
    ],
    [
      1562,
      647,
      2,
      481,
      0,
      "*"
    ],
    [
      1563,
      664,
      0,
      669,
      0,
      "IMAGE"
    ],
    [
      1564,
      650,
      0,
      647,
      2,
      "INSIGHTFACE_APP"
    ],
    [
      1565,
      673,
      0,
      655,
      0,
      "*"
    ],
    [
      1566,
      655,
      0,
      668,
      1,
      "FLOAT"
    ],
    [
      1567,
      647,
      0,
      407,
      0,
      "*"
    ],
    [
      1572,
      665,
      0,
      647,
      1,
      "MASK"
    ],
    [
      1573,
      669,
      0,
      665,
      0,
      "IMAGE"
    ],
    [
      1576,
      642,
      0,
      679,
      1,
      "FACE_EMBED"
    ],
    [
      1577,
      677,
      1,
      679,
      0,
      "RESAMPLER"
    ],
    [
      1580,
      252,
      0,
      680,
      0,
      "CONDITIONING"
    ],
    [
      1581,
      287,
      0,
      680,
      1,
      "CONDITIONING"
    ],
    [
      1582,
      636,
      0,
      680,
      3,
      "CONTROL_NET"
    ],
    [
      1583,
      481,
      0,
      680,
      4,
      "IMAGE"
    ],
    [
      1584,
      680,
      0,
      359,
      1,
      "CONDITIONING"
    ],
    [
      1585,
      680,
      1,
      359,
      2,
      "CONDITIONING"
    ],
    [
      1587,
      679,
      0,
      680,
      2,
      "FACE_CONDITIONING"
    ],
    [
      1588,
      241,
      0,
      681,
      0,
      "MODEL"
    ],
    [
      1589,
      677,
      0,
      681,
      1,
      "INSTANTID_ADAPTER"
    ],
    [
      1590,
      679,
      0,
      681,
      2,
      "FACE_CONDITIONING"
    ],
    [
      1591,
      681,
      0,
      359,
      0,
      "MODEL"
    ],
    [
      1592,
      471,
      0,
      287,
      0,
      "CLIP"
    ],
    [
      1593,
      474,
      0,
      682,
      0,
      "*"
    ]
  ],
  "groups": [],
  "config": {},
  "extra": {
    "ds": {
      "scale": 0.8390545288824507,
      "offset": [
        556.6002572542378,
        -306.22141924192647
      ]
    },
    "groupNodes": {}
  },
  "version": 0.4
}

================================================
FILE: workflows/draw_kps.json
================================================
{
  "last_node_id": 658,
  "last_link_id": 1520,
  "nodes": [
    {
      "id": 268,
      "type": "PreviewImage",
      "pos": {
        "0": 1260,
        "1": -340
      },
      "size": {
        "0": 210,
        "1": 246
      },
      "flags": {},
      "order": 28,
      "mode": 0,
      "inputs": [
        {
          "name": "images",
          "type": "IMAGE",
          "link": 926
        }
      ],
      "outputs": [],
      "title": "Image for inpaint",
      "properties": {
        "Node name for S&R": "PreviewImage"
      },
      "widgets_values": [],
      "color": "#233",
      "bgcolor": "#355"
    },
    {
      "id": 412,
      "type": "Reroute",
      "pos": {
        "0": 1201,
        "1": 144
      },
      "size": [
        75,
        26
      ],
      "flags": {},
      "order": 20,
      "mode": 0,
      "inputs": [
        {
          "name": "",
          "type": "*",
          "link": 1475,
          "widget": {
            "name": "value"
          }
        }
      ],
      "outputs": [
        {
          "name": "",
          "type": "INT",
          "links": [
            840
          ],
          "slot_index": 0
        }
      ],
      "properties": {
        "showOutputText": false,
        "horizontal": false
      }
    },
    {
      "id": 413,
      "type": "Reroute",
      "pos": {
        "0": 1201,
        "1": 164
      },
      "size": [
        75,
        26
      ],
      "flags": {},
      "order": 21,
      "mode": 0,
      "inputs": [
        {
          "name": "",
          "type": "*",
          "link": 1476,
          "widget": {
            "name": "value"
          }
        }
      ],
      "outputs": [
        {
          "name": "",
          "type": "INT",
          "links": [
            841
          ],
          "slot_index": 0
        }
      ],
      "properties": {
        "showOutputText": false,
        "horizontal": false
      }
    },
    {
      "id": 395,
      "type": "PreviewImage",
      "pos": {
        "0": 1440,
        "1": 430
      },
      "size": {
        "0": 612.2093505859375,
        "1": 842.1597900390625
      },
      "flags": {},
      "order": 44,
      "mode": 0,
      "inputs": [
        {
          "name": "images",
          "type": "IMAGE",
          "link": 1418
        }
      ],
      "outputs": [],
      "title": "Output Image",
      "properties": {
        "Node name for S&R": "PreviewImage"
      },
      "widgets_values": [],
      "color": "#233",
      "bgcolor": "#355"
    },
    {
      "id": 258,
      "type": "PreviewImage",
      "pos": {
        "0": 2100,
        "1": 430
      },
      "size": {
        "0": 612.114013671875,
        "1": 845.9668579101562
      },
      "flags": {},
      "order": 14,
      "mode": 0,
      "inputs": [
        {
          "name": "images",
          "type": "IMAGE",
          "link": 1281
        }
      ],
      "outputs": [],
      "title": "Input Image",
      "properties": {
        "Node name for S&R": "PreviewImage"
      },
      "widgets_values": [],
      "color": "#233",
      "bgcolor": "#355"
    },
    {
      "id": 630,
      "type": "SetLatentNoiseMask",
      "pos": {
        "0": 894,
        "1": 510
      },
      "size": {
        "0": 210,
        "1": 46
      },
      "flags": {
        "collapsed": true
      },
      "order": 37,
      "mode": 0,
      "inputs": [
        {
          "name": "samples",
          "type": "LATENT",
          "link": 1419
        },
        {
          "name": "mask",
          "type": "MASK",
          "link": 1421
        }
      ],
      "outputs": [
        {
          "name": "LATENT",
          "type": "LATENT",
          "links": [
            1420
          ],
          "slot_index": 0,
          "shape": 3
        }
      ],
      "properties": {
        "Node name for S&R": "SetLatentNoiseMask"
      },
      "widgets_values": []
    },
    {
      "id": 354,
      "type": "VAEEncode",
      "pos": {
        "0": 751,
        "1": 510
      },
      "size": {
        "0": 309.7555847167969,
        "1": 46
      },
      "flags": {
        "collapsed": true
      },
      "order": 27,
      "mode": 0,
      "inputs": [
        {
          "name": "pixels",
          "type": "IMAGE",
          "link": 923
        },
        {
          "name": "vae",
          "type": "VAE",
          "link": 1024
        }
      ],
      "outputs": [
        {
          "name": "LATENT",
          "type": "LATENT",
          "links": [
            1419
          ],
          "slot_index": 0,
          "shape": 3
        }
      ],
      "properties": {
        "Node name for S&R": "VAEEncode"
      },
      "widgets_values": []
    },
    {
      "id": 569,
      "type": "Reroute",
      "pos": {
        "0": 206,
        "1": 313
      },
      "size": [
        75,
        26
      ],
      "flags": {},
      "order": 8,
      "mode": 0,
      "inputs": [
        {
          "name": "",
          "type": "*",
          "link": 1415
        }
      ],
      "outputs": [
        {
          "name": "",
          "type": "IMAGE",
          "links": [
            1314
          ],
          "slot_index": 0
        }
      ],
      "properties": {
        "showOutputText": false,
        "horizontal": false
      }
    },
    {
      "id": 474,
      "type": "Reroute",
      "pos": {
        "0": 416,
        "1": 478
      },
      "size": [
        75,
        26
      ],
      "flags": {},
      "order": 12,
      "mode": 0,
      "inputs": [
        {
          "name": "",
          "type": "*",
          "link": 1026
        }
      ],
      "outputs": [
        {
          "name": "",
          "type": "VAE",
          "links": [
            1024,
            1027
          ],
          "slot_index": 0
        }
      ],
      "properties": {
        "showOutputText": false,
        "horizontal": false
      }
    },
    {
      "id": 248,
      "type": "VAEDecode",
      "pos": {
        "0": 1438,
        "1": 323
      },
      "size": {
        "0": 140,
        "1": 46
      },
      "flags": {
        "collapsed": true
      },
      "order": 41,
      "mode": 0,
      "inputs": [
        {
          "name": "samples",
          "type": "LATENT",
          "link": 1340
        },
        {
          "name": "vae",
          "type": "VAE",
          "link": 768
        }
      ],
      "outputs": [
        {
          "name": "IMAGE",
          "type": "IMAGE",
          "links": [
            797
          ],
          "slot_index": 0,
          "shape": 3
        }
      ],
      "properties": {
        "Node name for S&R": "VAEDecode"
      },
      "widgets_values": []
    },
    {
      "id": 394,
      "type": "ImageScale",
      "pos": {
        "0": 1603,
        "1": 186
      },
      "size": {
        "0": 315,
        "1": 130
      },
      "flags": {
        "collapsed": true
      },
      "order": 42,
      "mode": 0,
      "inputs": [
        {
          "name": "image",
          "type": "IMAGE",
          "link": 797
        },
        {
          "name": "width",
          "type": "INT",
          "link": 840,
          "widget": {
            "name": "width"
          }
        },
        {
          "name": "height",
          "type": "INT",
          "link": 841,
          "widget": {
            "name": "height"
          }
        }
      ],
      "outputs": [
        {
          "name": "IMAGE",
          "type": "IMAGE",
          "links": [
            781
          ],
          "slot_index": 0,
          "shape": 3
        }
      ],
      "properties": {
        "Node name for S&R": "ImageScale"
      },
      "widgets_values": [
        "bilinear",
        512,
        512,
        "disabled"
      ]
    },
    {
      "id": 396,
      "type": "ImageCompositeMasked",
      "pos": {
        "0": 1840,
        "1": 149
      },
      "size": {
        "0": 327.45550537109375,
        "1": 140.86239624023438
      },
      "flags": {
        "collapsed": true
      },
      "order": 43,
      "mode": 0,
      "inputs": [
        {
          "name": "destination",
          "type": "IMAGE",
          "link": 1032
        },
        {
          "name": "source",
          "type": "IMAGE",
          "link": 781
        },
        {
          "name": "mask",
          "type": "MASK",
          "link": null,
          "shape": 7
        },
        {
          "name": "x",
          "type": "INT",
          "link": 836,
          "widget": {
            "name": "x"
          }
        },
        {
          "name": "y",
          "type": "INT",
          "link": 838,
          "widget": {
            "name": "y"
          }
        }
      ],
      "outputs": [
        {
          "name": "IMAGE",
          "type": "IMAGE",
          "links": [
            1418
          ],
          "slot_index": 0,
          "shape": 3
        }
      ],
      "properties": {
        "Node name for S&R": "ImageCompositeMasked"
      },
      "widgets_values": [
        0,
        0,
        false
      ]
    },
    {
      "id": 389,
      "type": "Reroute",
      "pos": {
        "0": 1708,
        "1": 311
      },
      "size": [
        75,
        26
      ],
      "flags": {},
      "order": 15,
      "mode": 0,
      "inputs": [
        {
          "name": "",
          "type": "*",
          "link": 1314
        }
      ],
      "outputs": [
        {
          "name": "",
          "type": "IMAGE",
          "links": [
            1032
          ],
          "slot_index": 0
        }
      ],
      "properties": {
        "showOutputText": false,
        "horizontal": false
      }
    },
    {
      "id": 392,
      "type": "Reroute",
      "pos": {
        "0": 756,
        "1": 293
      },
      "size": [
        75,
        26
      ],
      "flags": {},
      "order": 25,
      "mode": 0,
      "inputs": [
        {
          "name": "",
          "type": "*",
          "link": 1027
        }
      ],
      "outputs": [
        {
          "name": "",
          "type": "VAE",
          "links": [
            768
          ],
          "slot_index": 0
        }
      ],
      "properties": {
        "showOutputText": false,
        "horizontal": false
      }
    },
    {
      "id": 262,
      "type": "MaskToImage",
      "pos": {
        "0": 1240,
        "1": 20
      },
      "size": {
        "0": 210,
        "1": 26
      },
      "flags": {
        "collapsed": true
      },
      "order": 29,
      "mode": 0,
      "inputs": [
        {
          "name": "mask",
          "type": "MASK",
          "link": 828,
          "slot_index": 0
        }
      ],
      "outputs": [
        {
          "name": "IMAGE",
          "type": "IMAGE",
          "links": [
            408,
            558
          ],
          "slot_index": 0,
          "shape": 3
        }
      ],
      "properties": {
        "Node name for S&R": "MaskToImage"
      },
      "widgets_values": []
    },
    {
      "id": 323,
      "type": "ImageToMask",
      "pos": {
        "0": 1790,
        "1": 20
      },
      "size": {
        "0": 315,
        "1": 58
      },
      "flags": {
        "collapsed": true
      },
      "order": 34,
      "mode": 0,
      "inputs": [
        {
          "name": "image",
          "type": "IMAGE",
          "link": 563
        }
      ],
      "outputs": [
        {
          "name": "MASK",
          "type": "MASK",
          "links": [
            1421
          ],
          "slot_index": 0,
          "shape": 3
        }
      ],
      "properties": {
        "Node name for S&R": "ImageToMask"
      },
      "widgets_values": [
        "red"
      ]
    },
    {
      "id": 264,
      "type": "PreviewImage",
      "pos": {
        "0": 1500,
        "1": -340
      },
      "size": {
        "0": 210,
        "1": 246
      },
      "flags": {},
      "order": 31,
      "mode": 0,
      "inputs": [
        {
          "name": "images",
          "type": "IMAGE",
          "link": 408
        }
      ],
      "outputs": [],
      "title": "Mask",
      "properties": {
        "Node name for S&R": "PreviewImage"
      },
      "widgets_values": [],
      "color": "#233",
      "bgcolor": "#355"
    },
    {
      "id": 579,
      "type": "Reroute",
      "pos": {
        "0": 1967,
        "1": 51
      },
      "size": [
        75,
        26
      ],
      "flags": {},
      "order": 7,
      "mode": 0,
      "inputs": [
        {
          "name": "",
          "type": "*",
          "link": 1260
        }
      ],
      "outputs": [
        {
          "name": "",
          "type": "IMAGE",
          "links": [
            1281
          ],
          "slot_index": 0
        }
      ],
      "properties": {
        "showOutputText": false,
        "horizontal": false
      }
    },
    {
      "id": 410,
      "type": "Reroute",
      "pos": {
        "0": 1118,
        "1": 104
      },
      "size": [
        75,
        26
      ],
      "flags": {},
      "order": 18,
      "mode": 0,
      "inputs": [
        {
          "name": "",
          "type": "*",
          "link": 1473,
          "widget": {
            "name": "value"
          }
        }
      ],
      "outputs": [
        {
          "name": "",
          "type": "INT",
          "links": [
            836
          ],
          "slot_index": 0
        }
      ],
      "properties": {
        "showOutputText": false,
        "horizontal": false
      }
    },
    {
      "id": 411,
      "type": "Reroute",
      "pos": {
        "0": 1118,
        "1": 124
      },
      "size": [
        75,
        26
      ],
      "flags": {},
      "order": 19,
      "mode": 0,
      "inputs": [
        {
          "name": "",
          "type": "*",
          "link": 1474,
          "widget": {
            "name": "value"
          }
        }
      ],
      "outputs": [
        {
          "name": "",
          "type": "INT",
          "links": [
            838
          ]
        }
      ],
      "properties": {
        "showOutputText": false,
        "horizontal": false
      }
    },
    {
      "id": 369,
      "type": "PreviewImage",
      "pos": {
        "0": 1020,
        "1": -340
      },
      "size": {
        "0": 210,
        "1": 246
      },
      "flags": {},
      "order": 38,
      "mode": 0,
      "inputs": [
        {
          "name": "images",
          "type": "IMAGE",
          "link": 1052
        }
      ],
      "outputs": [],
      "title": "InstantId Control Image",
      "properties": {
        "Node name for S&R": "PreviewImage"
      },
      "widgets_values": [],
      "color": "#233",
      "bgcolor": "#355"
    },
    {
      "id": 326,
      "type": "PreviewImage",
      "pos": {
        "0": 1740,
        "1": -340
      },
      "size": {
        "0": 210,
        "1": 246
      },
      "flags": {},
      "order": 35,
      "mode": 0,
      "inputs": [
        {
          "name": "images",
          "type": "IMAGE",
          "link": 590
        }
      ],
      "outputs": [],
      "title": "Blurred Mask",
      "properties": {
        "Node name for S&R": "PreviewImage"
      },
      "widgets_values": [],
      "color": "#233",
      "bgcolor": "#355"
    },
    {
      "id": 407,
      "type": "Reroute",
      "pos": {
        "0": 1148,
        "1": 45
      },
      "size": [
        75,
        26
      ],
      "flags": {},
      "order": 16,
      "mode": 0,
      "inputs": [
        {
          "name": "",
          "type": "*",
          "link": 1481
        }
      ],
      "outputs": [
        {
          "name": "",
          "type": "IMAGE",
          "links": [
            923,
            926
          ],
          "slot_index": 0
        }
      ],
      "properties": {
        "showOutputText": false,
        "horizontal": false
      }
    },
    {
      "id": 408,
      "type": "Reroute",
      "pos": {
        "0": 802,
        "1": 62
      },
      "size": [
        75,
        26
      ],
      "flags": {},
      "order": 17,
      "mode": 0,
      "inputs": [
        {
          "name": "",
          "type": "*",
          "link": 1470
        }
      ],
      "outputs": [
        {
          "name": "",
          "type": "MASK",
          "links": [
            828
          ],
          "slot_index": 0
        }
      ],
      "properties": {
        "showOutputText": false,
        "horizontal": false
      }
    },
    {
      "id": 650,
      "type": "LoadInsightface",
      "pos": {
        "0": -620,
        "1": -130
      },
      "size": {
        "0": 210,
        "1": 26
      },
      "flags": {},
      "order": 0,
      "mode": 0,
      "inputs": [],
      "outputs": [
        {
          "name": "insightface",
          "type": "INSIGHTFACE_APP",
          "links": [
            1491
          ],
          "slot_index": 0,
          "shape": 3
        }
      ],
      "properties": {
        "Node name for S&R": "LoadInsightface"
      },
      "widgets_values": [],
      "color": "#432",
      "bgcolor": "#653"
    },
    {
      "id": 642,
      "type": "FaceEmbed",
      "pos": {
        "0": -310,
        "1": -100
      },
      "size": {
        "0": 292.20001220703125,
        "1": 66
      },
      "flags": {
        "collapsed": true
      },
      "order": 6,
      "mode": 0,
      "inputs": [
        {
          "name": "insightface",
          "type": "INSIGHTFACE_APP",
          "link": 1491
        },
        {
          "name": "face_image",
          "type": "IMAGE",
          "link": 1460
        },
        {
          "name": "face_embeds",
          "type": "FACE_EMBED",
          "link": null,
          "shape": 7
        }
      ],
      "outputs": [
        {
          "name": "face embeds",
          "type": "FACE_EMBED",
          "links": [
            1498
          ],
          "slot_index": 0,
          "shape": 3
        }
      ],
      "properties": {
        "Node name for S&R": "FaceEmbed"
      },
      "widgets_values": [],
      "color": "#432",
      "bgcolor": "#653"
    },
    {
      "id": 652,
      "type": "LoadInstantIdAdapter",
      "pos": {
        "0": -377,
        "1": 805
      },
      "size": {
        "0": 291.64471435546875,
        "1": 78
      },
      "flags": {},
      "order": 1,
      "mode": 0,
      "inputs": [],
      "outputs": [
        {
          "name": "InstantId_adapter",
          "type": "INSTANTID_ADAPTER",
          "links": [
            1495
          ],
          "slot_index": 0,
          "shape": 3
        },
        {
          "name": "resampler",
          "type": "RESAMPLER",
          "links": [
            1496
          ],
          "slot_index": 1,
          "shape": 3
        }
      ],
      "properties": {
        "Node name for S&R": "LoadInstantIdAdapter"
      },
      "widgets_values": [
        "ip-adapter.bin"
      ],
      "color": "#432",
      "bgco
SYMBOL INDEX (154 symbols across 7 files)

FILE: ip_adapter/instantId.py
  class InstantId (line 4) | class InstantId(torch.nn.Module):
    method __init__ (line 5) | def __init__(self, ip_adapter):
  class CrossAttentionPatch (line 17) | class CrossAttentionPatch:
    method __init__ (line 18) | def __init__(self, scale, instantId, cond, number):
    method set_new_condition (line 24) | def set_new_condition(self, scale, instantId, cond, number):
    method __call__ (line 30) | def __call__(self, q, k, v, extra_options):

FILE: ip_adapter/resampler.py
  function FeedForward (line 8) | def FeedForward(dim, mult=4):
  function reshape_tensor (line 17) | def reshape_tensor(x, heads):
  class PerceiverAttention (line 28) | class PerceiverAttention(nn.Module):
    method __init__ (line 29) | def __init__(self, *, dim, dim_head=64, heads=8):
    method forward (line 43) | def forward(self, x, latents):
  class Resampler (line 75) | class Resampler(nn.Module):
    method __init__ (line 76) | def __init__(
    method forward (line 107) | def forward(self, x):

FILE: nodes.py
  function proxy_handle (line 31) | async def proxy_handle(request):
  class FaceEmbed (line 85) | class FaceEmbed:
    method __init__ (line 86) | def __init__(self):
    method INPUT_TYPES (line 90) | def INPUT_TYPES(self):
    method make_face_embed (line 106) | def make_face_embed(self, insightface, face_image, face_embeds = None):
  class FaceEmbedCombine (line 123) | class FaceEmbedCombine:
    method __init__ (line 124) | def __init__(self):
    method INPUT_TYPES (line 128) | def INPUT_TYPES(self):
    method combine_face_embed (line 141) | def combine_face_embed(self, resampler, face_embeds):
  class AngleFromFace (line 149) | class AngleFromFace:
    method __init__ (line 151) | def __init__(self):
    method INPUT_TYPES (line 155) | def INPUT_TYPES(self):
    method get_angle (line 174) | def get_angle(
  class AngleFromKps (line 193) | class AngleFromKps:
    method __init__ (line 195) | def __init__(self):
    method INPUT_TYPES (line 199) | def INPUT_TYPES(self):
    method get_angle (line 212) | def get_angle(self, kps_data, rotate_mode):
  class ComposeRotated (line 224) | class ComposeRotated:
    method __init__ (line 225) | def __init__(self):
    method INPUT_TYPES (line 229) | def INPUT_TYPES(self):
    method compose_rotate (line 242) | def compose_rotate(self, original_image, rotated_image):
  class RotateImage (line 265) | class RotateImage:
    method INPUT_TYPES (line 267) | def INPUT_TYPES(self):
    method rotate_and_pad_image (line 281) | def rotate_and_pad_image(self, image, angle, counter_clockwise):
  class LoadInstantIdAdapter (line 291) | class LoadInstantIdAdapter:
    method __init__ (line 292) | def __init__(self):
    method INPUT_TYPES (line 296) | def INPUT_TYPES(self):
    method load_instantId_adapter (line 308) | def load_instantId_adapter(self, ipadapter):
  class InstantIdAdapterApply (line 328) | class InstantIdAdapterApply:
    method __init__ (line 329) | def __init__(self):
    method INPUT_TYPES (line 333) | def INPUT_TYPES(self):
    method apply_instantId_adapter (line 348) | def apply_instantId_adapter(self, model, instantId_adapter, face_condi...
  class ControlNetInstantIdApply (line 379) | class ControlNetInstantIdApply:
    method INPUT_TYPES (line 381) | def INPUT_TYPES(self):
    method apply_controlnet (line 398) | def apply_controlnet(self, positive, negative, face_conditioning, cont...
  class InstantIdAndControlnetApply (line 433) | class InstantIdAndControlnetApply:
    method __init__ (line 434) | def __init__(self):
    method INPUT_TYPES (line 438) | def INPUT_TYPES(self):
    method apply_instantId_adapter_and_controlnet (line 458) | def apply_instantId_adapter_and_controlnet(
  class PreprocessImageAdvanced (line 489) | class PreprocessImageAdvanced:
    method __init__ (line 493) | def __init__(self):
    method INPUT_TYPES (line 497) | def INPUT_TYPES(self):
    method preprocess_image (line 521) | def preprocess_image(
  class PreprocessImage (line 571) | class PreprocessImage(PreprocessImageAdvanced):
    method __init__ (line 572) | def __init__(self):
    method INPUT_TYPES (line 576) | def INPUT_TYPES(self):
    method preprocess_image_simple (line 594) | def preprocess_image_simple(self, image, mask, width, height, resize_m...
  class LoadInsightface (line 601) | class LoadInsightface:
    method __init__ (line 602) | def __init__(self):
    method INPUT_TYPES (line 606) | def INPUT_TYPES(self):
    method load_insightface (line 614) | def load_insightface(self):
  class KpsDraw (line 625) | class KpsDraw:
    method INPUT_TYPES (line 627) | def INPUT_TYPES(self):
    method draw_kps (line 644) | def draw_kps(self, width, height, kps, image_reference = None):
  class Kps3dFromImage (line 649) | class Kps3dFromImage:
    method INPUT_TYPES (line 651) | def INPUT_TYPES(self):
    method make_kps (line 668) | def make_kps(self, width, height, kps, image):
  class KpsMaker (line 676) | class KpsMaker:
    method INPUT_TYPES (line 678) | def INPUT_TYPES(self):
    method draw_kps (line 690) | def draw_kps(self, kps_data):
  class Kps2dRandomizer (line 700) | class Kps2dRandomizer:
    method INPUT_TYPES (line 702) | def INPUT_TYPES(self):
    method rand_kps (line 722) | def rand_kps(self, kps_data, seed, angle_min, angle_max, scale_min, sc...
  class Kps3dRandomizer (line 819) | class Kps3dRandomizer:
    method INPUT_TYPES (line 821) | def INPUT_TYPES(self):
    method rand_kps (line 837) | def rand_kps(self, kps_data_3d, seed, rotate_x, rotate_y, rotate_z):
  class Kps2dScaleBy (line 866) | class Kps2dScaleBy:
    method INPUT_TYPES (line 868) | def INPUT_TYPES(self):
    method scale_kps_by (line 881) | def scale_kps_by(self, kps_data, scale):
  class Kps2dScale (line 895) | class Kps2dScale:
    method INPUT_TYPES (line 897) | def INPUT_TYPES(self):
    method scale_kps (line 911) | def scale_kps(self, kps_data, width, height):
  class Kps2dRotate (line 928) | class Kps2dRotate:
    method INPUT_TYPES (line 930) | def INPUT_TYPES(self):
    method rotate_kps (line 944) | def rotate_kps(self, kps_data, angle, counter_clockwise):
  class Kps2dCrop (line 962) | class Kps2dCrop:
    method INPUT_TYPES (line 964) | def INPUT_TYPES(self):
    method crop_kps (line 980) | def crop_kps(self, kps_data, x, y, width, height):
  class MaskFromKps (line 996) | class MaskFromKps:
    method INPUT_TYPES (line 998) | def INPUT_TYPES(self):
    method creat_mask (line 1011) | def creat_mask(self, kps_data, grow_by):

FILE: ui/dialogs.js
  class KPSDialogBase (line 5) | class KPSDialogBase {
    method constructor (line 6) | constructor(w, h, img = undefined) {
    method initializeCanvasPanZoom (line 49) | initializeCanvasPanZoom () {
    method invalidatePanZoom (line 73) | invalidatePanZoom () {
    method setBasicControls (line 96) | setBasicControls () {
    method createZoomSlider (line 162) | createZoomSlider () {
    method drawMoveAll (line 170) | drawMoveAll () {
  class KPSDialog2d (line 191) | class KPSDialog2d extends KPSDialogBase{
    method constructor (line 192) | constructor(w, h, referenceImage, angleWidget, kpsJsonWidget) {
    method getDefaultKps (line 229) | getDefaultKps () {
    method setControls (line 241) | setControls () {
    method createOpacitySlider (line 248) | createOpacitySlider () {
    method attachListeners (line 256) | attachListeners () {
    method closeModal (line 271) | closeModal () {
    method save (line 275) | async save () {
    method changePointsPosition (line 309) | changePointsPosition(closer = false, step = 10) {
    method mouseDown (line 349) | mouseDown (event) {
    method mouseMove (line 389) | mouseMove (event) {
    method mouseUp (line 425) | mouseUp (event) {
    method wheel (line 433) | wheel (event) {
    method draw (line 450) | draw () {
    method drawKeyPoints (line 456) | drawKeyPoints (canvas = this.canvas) {
    method drawImage (line 460) | drawImage (ref_image) {
    method drawImageWebGL2 (line 474) | drawImageWebGL2 (gl, image) {
  class KPSDialog3d (line 526) | class KPSDialog3d extends KPSDialogBase{
    method constructor (line 527) | constructor(w, h, angleWidget, kpsJsonWidget) {
    method getDefaultKps (line 564) | getDefaultKps () {
    method setControls (line 581) | setControls () {
    method createRotationXSlider (line 590) | createRotationXSlider () {
    method createRotationYSlider (line 598) | createRotationYSlider () {
    method createRotationZSlider (line 606) | createRotationZSlider () {
    method attachListeners (line 614) | attachListeners () {
    method closeModal (line 629) | closeModal () {
    method save (line 633) | async save () {
    method changePointsPosition (line 663) | changePointsPosition(closer = false, step = 10) {
    method mouseDown (line 706) | mouseDown (event) {
    method mouseMove (line 740) | mouseMove (event) {
    method mouseUp (line 774) | mouseUp (event) {
    method wheel (line 782) | wheel (event) {
    method draw (line 799) | draw () {
    method drawLandmarks (line 809) | drawLandmarks (p, canvas = this.canvas) {
    method drawKeyPoints (line 824) | drawKeyPoints (canvas = this.canvas) {

FILE: ui/extension.js
  method getCustomWidgets (line 8) | getCustomWidgets(app) {
  method beforeRegisterNodeDef (line 24) | async beforeRegisterNodeDef(nodeType, nodeData, app) {

FILE: ui/helpers.js
  function rotateX (line 165) | function rotateX(point, angle) {
  function rotateY (line 175) | function rotateY(point, angle) {
  function rotateZ (line 185) | function rotateZ(point, angle) {

FILE: utils.py
  function draw_kps (line 10) | def draw_kps(w, h, kps, color_list=[(255, 0, 0), (0, 255, 0), (0, 0, 255...
  function set_model_patch_replace (line 47) | def set_model_patch_replace(model, patch_kwargs, key):
  function resize_to_fit_area (line 66) | def resize_to_fit_area(original_width, original_height, area_width, area...
  function get_mask_bbox_with_padding (line 85) | def get_mask_bbox_with_padding(mask_image, pad_top, pad_right, pad_botto...
  function get_kps_from_image (line 105) | def get_kps_from_image(image, insightface):
  function get_angle (line 113) | def get_angle(a=(0, 0), b=(0, 0), round_angle=False):
  function calculate_size_after_rotation (line 123) | def calculate_size_after_rotation(width, height, angle):
  function image_rotate_with_pad (line 132) | def image_rotate_with_pad(image, clockwise, angle):
  function kps_rotate_2d (line 143) | def kps_rotate_2d(points, original_width, original_height, new_width, ne...
  function kps_rotate_3d (line 174) | def kps_rotate_3d(points, angleXDeg, angleYDeg, angleZDeg):
  function kps3d_to_kps2d (line 218) | def kps3d_to_kps2d (kps):
  function get_bbox_from_kps (line 234) | def get_bbox_from_kps (kps_data, grow_by):
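The node classes indexed above (FaceEmbed, LoadInsightface, and the rest in nodes.py) each expose an `INPUT_TYPES` method, and `__init__.py` re-exports `NODE_CLASS_MAPPINGS` and `NODE_DISPLAY_NAME_MAPPINGS` so ComfyUI can discover them. As a rough sketch of that registration convention — `ExampleNode` and its field names are illustrative, not taken from this repo:

```python
# Minimal sketch of the ComfyUI custom-node pattern that nodes.py follows.
# ExampleNode is hypothetical; the real classes live in nodes.py.

class ExampleNode:
    # ComfyUI calls this to build the node's input sockets and widgets.
    @classmethod
    def INPUT_TYPES(cls):
        return {
            "required": {
                "image": ("IMAGE",),                      # socket typed by string tag
                "strength": ("FLOAT", {"default": 1.0}),  # widget with options
            }
        }

    RETURN_TYPES = ("IMAGE",)   # output socket types
    FUNCTION = "run"            # name of the method ComfyUI invokes
    CATEGORY = "example"

    def run(self, image, strength):
        # Must return a tuple matching RETURN_TYPES.
        return (image,)

# __init__.py exposes these dicts so ComfyUI registers the nodes on startup.
NODE_CLASS_MAPPINGS = {"ExampleNode": ExampleNode}
NODE_DISPLAY_NAME_MAPPINGS = {"ExampleNode": "Example Node"}
```

The workflow JSON files above reference these nodes by their mapping keys (the `"type"` field of each node entry), which is why the graphs only load once the extension is installed.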
Condensed preview — 31 files, each showing path, character count, and a content snippet (full structured content is 1,046K chars).
[
  {
    "path": ".github/workflows/publish_action.yml",
    "chars": 460,
    "preview": "name: Publish to Comfy registry\non:\n  workflow_dispatch:\n  push:\n    branches:\n      - main\n    paths:\n      - \"pyprojec"
  },
  {
    "path": ".gitignore",
    "chars": 3078,
    "preview": "# Byte-compiled / optimized / DLL files\n__pycache__/\n*.py[cod]\n*$py.class\n\n# C extensions\n*.so\n\n# Distribution / packagi"
  },
  {
    "path": "LICENSE",
    "chars": 11357,
    "preview": "                                 Apache License\n                           Version 2.0, January 2004\n                   "
  },
  {
    "path": "README.md",
    "chars": 23280,
    "preview": "# ComfyUI InstantID FaceSwap v0.1.1\n<sub>[About](#comfyui-instantid-faceswap-v011) | [Installation guide](#installation-"
  },
  {
    "path": "__init__.py",
    "chars": 172,
    "preview": "from .nodes import NODE_CLASS_MAPPINGS, NODE_DISPLAY_NAME_MAPPINGS\nWEB_DIRECTORY = './ui/'\n\n__all__ = ['NODE_CLASS_MAPPI"
  },
  {
    "path": "ip_adapter/instantId.py",
    "chars": 1494,
    "preview": "import torch\nfrom comfy.ldm.modules.attention import optimized_attention\n\nclass InstantId(torch.nn.Module):\n  def __init"
  },
  {
    "path": "ip_adapter/resampler.py",
    "chars": 3116,
    "preview": "# modified from https://github.com/mlfoundations/open_flamingo/blob/main/open_flamingo/src/helpers.py\nimport math\n\nimpor"
  },
  {
    "path": "nodes.py",
    "chars": 34701,
    "preview": "import os\nimport cv2\nimport torch\nimport numpy as np\nimport json\nimport comfy.utils\nimport folder_paths\nfrom urllib.pars"
  },
  {
    "path": "pyproject.toml",
    "chars": 498,
    "preview": "[project]\nname = \"comfyui-instantid-faceswap\"\ndescription = \"Implementation of [a/faceswap](https://github.com/nosiu/Ins"
  },
  {
    "path": "requirements.txt",
    "chars": 27,
    "preview": "insightface\nonnxruntime-gpu"
  },
  {
    "path": "ui/dialogs.js",
    "chars": 27158,
    "preview": "import { createShader, vertexShaderSrc, fragmentShaderSrc } from \"./shaders.js\"\nimport { getPointsCenter, drawKps, check"
  },
  {
    "path": "ui/extension.js",
    "chars": 9511,
    "preview": "import { app } from \"../../scripts/app.js\";\nimport { api } from \"../../scripts/api.js\";\nimport { drawKps, normalizePoint"
  },
  {
    "path": "ui/helpers.js",
    "chars": 6030,
    "preview": "export const getPointsCenter = (points) => {\n  let sumX = 0, sumY = 0, sumZ = 0;\n  points.forEach(([x, y, z]) => {\n     "
  },
  {
    "path": "ui/shaders.js",
    "chars": 820,
    "preview": "export const vertexShaderSrc = `#version 300 es\n#pragma vscode_glsllint_stage: vert\nlayout(location=0) in vec4 aPosition"
  },
  {
    "path": "ui/uiHelpers.js",
    "chars": 3411,
    "preview": "export const createSlider = (name, id, min, max, step, value, callback) => {\n    const divElement = document.createEleme"
  },
  {
    "path": "utils.py",
    "chars": 7981,
    "preview": "import numpy as np\nimport cv2\nimport math\nimport torch\nimport math\nimport torch.nn.functional as F\nfrom torchvision.tran"
  },
  {
    "path": "workflows/auto_rotate.json",
    "chars": 45690,
    "preview": "{\n  \"last_node_id\": 682,\n  \"last_link_id\": 1593,\n  \"nodes\": [\n    {\n      \"id\": 268,\n      \"type\": \"PreviewImage\",\n     "
  },
  {
    "path": "workflows/draw_kps.json",
    "chars": 43272,
    "preview": "{\n  \"last_node_id\": 658,\n  \"last_link_id\": 1520,\n  \"nodes\": [\n    {\n      \"id\": 268,\n      \"type\": \"PreviewImage\",\n     "
  },
  {
    "path": "workflows/draw_kps_rotate.json",
    "chars": 56357,
    "preview": "{\n  \"last_node_id\": 681,\n  \"last_link_id\": 1581,\n  \"nodes\": [\n    {\n      \"id\": 268,\n      \"type\": \"PreviewImage\",\n     "
  },
  {
    "path": "workflows/inpaint.json",
    "chars": 18795,
    "preview": "{\n  \"last_node_id\": 815,\n  \"last_link_id\": 1936,\n  \"nodes\": [\n    {\n      \"id\": 798,\n      \"type\": \"VAEEncode\",\n      \"p"
  },
  {
    "path": "workflows/promp2image.json",
    "chars": 22786,
    "preview": "{\n  \"last_node_id\": 803,\n  \"last_link_id\": 1917,\n  \"nodes\": [\n    {\n      \"id\": 636,\n      \"type\": \"ControlNetLoader\",\n "
  },
  {
    "path": "workflows/promp2image_detail_pass.json",
    "chars": 53413,
    "preview": "{\n  \"last_node_id\": 845,\n  \"last_link_id\": 2005,\n  \"nodes\": [\n    {\n      \"id\": 650,\n      \"type\": \"LoadInsightface\",\n  "
  },
  {
    "path": "workflows/prompts2img_2faces_enhancement.json",
    "chars": 80059,
    "preview": "{\n  \"last_node_id\": 806,\n  \"last_link_id\": 1916,\n  \"nodes\": [\n    {\n      \"id\": 642,\n      \"type\": \"FaceEmbed\",\n      \"p"
  },
  {
    "path": "workflows/prop2image_latent_upscale.json",
    "chars": 71774,
    "preview": "{\n  \"last_node_id\": 871,\n  \"last_link_id\": 2058,\n  \"nodes\": [\n    {\n      \"id\": 650,\n      \"type\": \"LoadInsightface\",\n  "
  },
  {
    "path": "workflows/prop2image_latent_upscale_with_2d_randomizer.json",
    "chars": 73366,
    "preview": "{\n  \"last_node_id\": 874,\n  \"last_link_id\": 2062,\n  \"nodes\": [\n    {\n      \"id\": 650,\n      \"type\": \"LoadInsightface\",\n  "
  },
  {
    "path": "workflows/prop2image_latent_upscale_with_3d_and_2d_randomizer.json",
    "chars": 88738,
    "preview": "{\n  \"last_node_id\": 877,\n  \"last_link_id\": 2067,\n  \"nodes\": [\n    {\n      \"id\": 650,\n      \"type\": \"LoadInsightface\",\n  "
  },
  {
    "path": "workflows/prop2image_latent_upscale_with_3d_and_2d_randomizer_with_rotation.json",
    "chars": 100655,
    "preview": "{\n  \"last_node_id\": 895,\n  \"last_link_id\": 2096,\n  \"nodes\": [\n    {\n      \"id\": 650,\n      \"type\": \"LoadInsightface\",\n  "
  },
  {
    "path": "workflows/simple.json",
    "chars": 35112,
    "preview": "{\n  \"last_node_id\": 651,\n  \"last_link_id\": 1489,\n  \"nodes\": [\n    {\n      \"id\": 369,\n      \"type\": \"PreviewImage\",\n     "
  },
  {
    "path": "workflows/simple_two_embeds.json",
    "chars": 37186,
    "preview": "{\n  \"last_node_id\": 653,\n  \"last_link_id\": 1492,\n  \"nodes\": [\n    {\n      \"id\": 369,\n      \"type\": \"PreviewImage\",\n     "
  },
  {
    "path": "workflows/simple_with_adapter.json",
    "chars": 36416,
    "preview": "{\n  \"last_node_id\": 651,\n  \"last_link_id\": 1488,\n  \"nodes\": [\n    {\n      \"id\": 369,\n      \"type\": \"PreviewImage\",\n     "
  },
  {
    "path": "workflows/very_simple.json",
    "chars": 33470,
    "preview": "{\n  \"last_node_id\": 634,\n  \"last_link_id\": 1450,\n  \"nodes\": [\n    {\n      \"id\": 369,\n      \"type\": \"PreviewImage\",\n     "
  }
]

About this extraction

This page contains the source code of the nosiu/comfyui-instantId-faceswap GitHub repository, extracted and formatted as plain text for AI agents and large language models (LLMs). The extraction includes 31 files (908.4 KB), approximately 272.1k tokens, and a symbol index with 154 extracted functions, classes, methods, constants, and types.

Extracted by GitExtract, a GitHub repo-to-text converter built by Nikandr Surkov.